\text { (a) } \nabla \cdot B =0, \nabla \times B =\mu_{0} J , \text { and } \nabla \cdot A =0, \nabla \times A = B \Rightarrow A =\frac{\mu_{0}}{4 \pi} \int \frac{ J }{\mathscr{r}} d \tau^{\prime} , so
\nabla \cdot A =0, \nabla \times A = B , \text { and } \nabla \cdot W =0 \text { (we'll choose it so), } \nabla \times W = A \Rightarrow W =\frac{1}{4 \pi} \int \frac{ B }{\mathscr{r}} d \tau^{\prime} .
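To make the analogy explicit: A stands to B here exactly as B stands to \mu_{0} J in magnetostatics, so the Biot-Savart-style solution for the potential carries over under the substitution \mu_{0} J \rightarrow B (which is why the \mu_{0} drops out):
\mu_{0} J \rightarrow B : \quad A =\frac{\mu_{0}}{4 \pi} \int \frac{ J }{\mathscr{r}} d \tau^{\prime} \longrightarrow W =\frac{\mu_{0}}{4 \pi} \int \frac{ B / \mu_{0}}{\mathscr{r}} d \tau^{\prime}=\frac{1}{4 \pi} \int \frac{ B }{\mathscr{r}} d \tau^{\prime} .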
(b) W will be proportional to B and to two factors of r (since differentiating twice must recover B), so I’ll try something of the form W =\alpha r ( r \cdot B )+\beta r^{2} B , and see if I can pick the constants \alpha \text { and } \beta in such a way that \nabla \cdot W =0 \text { and } \nabla \times W = A .
\nabla \cdot W =\alpha[( r \cdot B )( \nabla \cdot r )+ r \cdot \nabla ( r \cdot B )]+\beta\left[r^{2}( \nabla \cdot B )+ B \cdot \nabla \left(r^{2}\right)\right] . \nabla \cdot r =\frac{\partial x}{\partial x}+\frac{\partial y}{\partial y}+\frac{\partial z}{\partial z}=1+1+1=3 ;
\nabla ( r \cdot B )= r \times( \nabla \times B )+ B \times( \nabla \times r )+( r \cdot \nabla ) B +( B \cdot \nabla ) r ; but B is constant, so all derivatives of B vanish, and \nabla \times r =0 (Prob. 1.63), so
\nabla ( r \cdot B )=( B \cdot \nabla ) r =\left(B_{x} \frac{\partial}{\partial x}+B_{y} \frac{\partial}{\partial y}+B_{z} \frac{\partial}{\partial z}\right)(x \hat{ x }+y \hat{ y }+z \hat{ z })=B_{x} \hat{ x }+B_{y} \hat{ y }+B_{z} \hat{ z }= B ;
\nabla \left(r^{2}\right)=\left(\hat{ x } \frac{\partial}{\partial x}+\hat{ y } \frac{\partial}{\partial y}+\hat{ z } \frac{\partial}{\partial z}\right)\left(x^{2}+y^{2}+z^{2}\right)=2 x \hat{ x }+2 y \hat{ y }+2 z \hat{ z }=2 r . So
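As a quick consistency check, here is a minimal SymPy sketch (assuming a standard sympy installation; the variable names are mine) verifying the three identities just used, for constant B:

    from sympy import symbols, diff, Matrix

    x, y, z, Bx, By, Bz = symbols('x y z Bx By Bz')
    coords = (x, y, z)
    r = Matrix([x, y, z])       # position vector r
    B = Matrix([Bx, By, Bz])    # constant vector (components independent of x, y, z)

    div_r = sum(diff(r[i], coords[i]) for i in range(3))
    grad_r_dot_B = Matrix([diff(r.dot(B), v) for v in coords])
    grad_r2 = Matrix([diff(r.dot(r), v) for v in coords])

    assert div_r == 3            # div r = 3
    assert grad_r_dot_B == B     # grad(r.B) = B
    assert grad_r2 == 2*r        # grad(r^2) = 2r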
\nabla \cdot W =\alpha[3( r \cdot B )+( r \cdot B )]+\beta[0+2( r \cdot B )]=2( r \cdot B )(2 \alpha+\beta), \text { which is zero if } 2 \alpha+\beta=0 .
\nabla \times W =\alpha[( r \cdot B )( \nabla \times r )- r \times \nabla ( r \cdot B )]+\beta\left[r^{2}( \nabla \times B )- B \times \nabla \left(r^{2}\right)\right]=\alpha[0-( r \times B )]+\beta[0-2( B \times r )]=-( r \times B )(\alpha-2 \beta) . This must equal A =-\frac{1}{2}( r \times B ) , the vector potential of a uniform field (Prob. 5.25), so we want \alpha-2 \beta=1 / 2 . Evidently \alpha-2(-2 \alpha)=5 \alpha=1 / 2 ,
\text { or } \alpha=1 / 10 ; \beta=-2 \alpha=-1 / 5 . Conclusion: W =\frac{1}{10}\left[ r ( r \cdot B )-2 r^{2} B \right] . (But this is certainly not unique.)
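A similar SymPy sketch (same assumptions as above) confirms that this W satisfies \nabla \cdot W =0 and \nabla \times W = A =-\frac{1}{2}( r \times B ) :

    from sympy import symbols, Rational
    from sympy.vector import CoordSys3D, Vector, curl, divergence

    N = CoordSys3D('N')
    Bx, By, Bz = symbols('Bx By Bz')
    r = N.x*N.i + N.y*N.j + N.z*N.k                    # position vector r
    B = Bx*N.i + By*N.j + Bz*N.k                       # uniform field (constant components)
    W = Rational(1, 10) * (r.dot(B)*r - 2*r.dot(r)*B)  # the W found above

    assert divergence(W) == 0                                  # div W = 0
    assert curl(W) + Rational(1, 2)*r.cross(B) == Vector.zero  # curl W = -(r x B)/2 = A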
\text { (c) } \nabla \times W = A \Rightarrow \int( \nabla \times W ) \cdot d a =\int A \cdot d a , or, by Stokes' theorem, \oint W \cdot d l =\int A \cdot d a . Integrate around a thin rectangular amperian loop of length l, lying in a plane containing the axis, with one long side along the axis and the other at distance s; take W to point parallel to the axis, and choose W = 0 on the axis:
-W l=\int_{0}^{s}\left(\frac{\mu_{0} n I}{2}\right) l \bar{s} d \bar{s}=\frac{\mu_{0} n I}{2} \frac{s^{2} l}{2} (using Eq. 5.72 for A).
A =\frac{\mu_{0} n I}{2} s \hat{\phi}, \quad \text { for } s \leq R (5.72)
W =-\frac{\mu_{0} n I s^{2}}{4} \hat{ z } (s < R).
\text { For } s>R,-W l=\frac{\mu_{0} n I R^{2} l}{4}+\int_{R}^{s}\left(\frac{\mu_{0} n I}{2}\right) \frac{R^{2}}{\bar{s}} l d \bar{s}=\frac{\mu_{0} n I R^{2} l}{4}+\frac{\mu_{0} n I R^{2} l}{2} \ln (s / R) ;
W =-\frac{\mu_{0} n I R^{2}}{4}[1+2 \ln (s / R)] \hat{ z } (s > R).
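As a final check (again a SymPy sketch with illustrative names), differentiating these results recovers Eq. 5.72: for W =W_{z}(s) \hat{ z } , the only surviving curl component in cylindrical coordinates is (\nabla \times W )_{\phi}=-d W_{z} / d s , which should equal A_{\phi} in both regions, and the two branches should agree at s=R :

    from sympy import symbols, diff, log, simplify

    s, R, mu0, n, I = symbols('s R mu_0 n I', positive=True)

    Wz_in  = -mu0*n*I*s**2/4                     # W_z for s < R
    Wz_out = -mu0*n*I*R**2/4 * (1 + 2*log(s/R))  # W_z for s > R
    A_in   = mu0*n*I*s/2                         # A_phi inside (Eq. 5.72)
    A_out  = mu0*n*I*R**2/(2*s)                  # A_phi outside (the integrand above)

    # (curl W)_phi = -dW_z/ds must reproduce A_phi in each region:
    assert simplify(-diff(Wz_in, s) - A_in) == 0
    assert simplify(-diff(Wz_out, s) - A_out) == 0
    # The two branches match at the solenoid surface s = R:
    assert simplify(Wz_in.subs(s, R) - Wz_out.subs(s, R)) == 0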