Associate Professor Mathematics and Statistics
 
Introduction to the Harmonic Measure


Prelude : Reaction-Diffusion-Convection processes

The general reaction-diffusion-convection partial differential equation in $\mathbb{R}^n$ is of the form $$ u_t = \textrm{div}(K\textrm{grad } u)-\textrm{div}(\mu V)+f$$ where $u = u(x,t)$, $(x,t)\in \mathbb{R}^n\times [0,\infty)$, $K\ge0$ and $\mu$ are in general functions of $u$, $t$, and $x$, and $V$ is a vector field with the same dependence. This equation is derived by modeling the evolution of the density $u$ of a substance in space and time. The diffusion term $\textrm{div}(K\textrm{grad } u)$ states that the substance moves from places with greater concentration towards places with less concentration, with diffusion coefficient $K\ge0$. The convection term $-\textrm{div}(\mu V)$ represents the change in concentration in time due to the movement of the substance within the medium containing it; the vector field $V$ models the movement of this medium. Finally, the source term $f$ represents the creation or destruction of substance (source or sink) due to some chemical or physical reaction. For a detailed derivation of this equation see, for example, Chicone’s “Ordinary Differential Equations with Applications”.

One of the simplest cases arises when $K=k^2$ is constant, and $\mu =\gamma u$ is a constant multiple of $u$ for some fixed $\gamma\in\mathbb{R}$. We also impose $\textrm{div }V=0$, which in physical terms means that the medium in which the process takes place is an incompressible fluid. Hence, the PDE is now

$$u_t = k^2\Delta u-\gamma\textrm{grad}(u)\cdot V+f.$$

The first term on the right is called the diffusion term, the gradient term is the convection term, and $f$ is the source term. Here $\Delta$ denotes the Laplacian operator $$\Delta u = \frac{\partial^2 u}{\partial x_1^2}+\cdots+\frac{\partial^2 u}{\partial x_n^2}.$$
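As a quick illustration (not part of the derivation above), the equation can be discretized with finite differences; the following is a minimal sketch in one space dimension, where the grid, time step, and parameter values are arbitrary choices made for the example:

```python
import numpy as np

def step(u, dx, dt, k=1.0, gamma=0.0, v=0.0, f=0.0):
    """One explicit Euler step of u_t = k^2 u_xx - gamma v u_x + f
    on a periodic 1-D grid (v constant, so div V = 0 automatically)."""
    u_xx = (np.roll(u, -1) - 2*u + np.roll(u, 1)) / dx**2   # diffusion term
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2*dx)         # convection term
    return u + dt * (k**2 * u_xx - gamma * v * u_x + f)
```

With $f=0$ the scheme conserves total mass exactly, and for a pure diffusion step with $k^2\,dt/dx^2\le 1/2$ each new value is a convex combination of neighboring values, so the maximum cannot increase.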

Diffusion processes

In the absence of convection and source terms, we are left with the heat equation: $$u_t = k^2\Delta u.$$ If we assume an initial temperature distribution $u_0(x)$ in $\mathbb{R}^n$, then a solution to the initial value problem $u_t = k^2\Delta u$ in $\mathbb{R}^n\times(0,\infty)$, $u(x,0)=u_0(x)$ in $\mathbb{R}^n$, would tell us how temperature diffuses as time increases. However, this formulation of the initial value problem (IVP) is missing a condition on the boundary of the domain, namely, at infinity. Instead of considering the problem in the whole space, we will consider an open subset $\Omega$ of $\mathbb{R}^n$ with a nice enough boundary $\partial\Omega$. For example, we may consider the half space $\Omega = \mathbb{R}^n_+= \{ x\in\mathbb{R}^n:x_n>0\}$, in which case $\partial\Omega$ is the hyperplane $\Gamma = \{x:x_n=0\}$, or we may consider the unit ball $\Omega = B_1 = \{ x\in\mathbb{R}^n:|x|<1\}$, for which the boundary is the unit sphere $\partial\Omega = S^{n-1} = \{ x\in\mathbb{R}^n:|x|=1\}$. Assuming that $u$ measures heat density (temperature), we will consider two different physical scenarios that determine the dynamics at the boundary: Dirichlet conditions or Neumann conditions. In the first case we assume an initial temperature distribution $u_0$ on $\Omega$, we do not insulate the boundary, and we maintain the boundary at a fixed temperature distribution $g$ for all time; we obtain the Dirichlet problem:

$$\begin{equation*} \left\{ \begin{array}{ll} u_{t}=k^{2}\Delta u & \left( x,t\right) \in \Omega \times \left( 0,\infty \right) \\ u(x,0) = u_0(x) & x\in\Omega \\ u\left( x,t\right) =g\left( x\right) & (x,t)\in \partial \Omega \times (0,\infty) \end{array} \right. \qquad \left( D_t\right) \end{equation*} $$

In the second case we also assume an initial temperature distribution $u_0$ but now we maintain the boundary insulated, no heat can come in or go out through it. This is imposed by asking that the outward normal derivative of $u$ must vanish at the boundary. Let $\nu$ denote the outward unit normal to the boundary $\partial\Omega$, then the normal derivative of $u$ at the boundary is $u_\nu = \frac{\partial u}{\partial\nu}=\textrm{grad}(u)\cdot\nu$. This is the Neumann problem:

$$\begin{equation*} \left\{ \begin{array}{ll} u_{t}=k^{2}\Delta u & \left( x,t\right) \in \Omega \times \left( 0,\infty \right) \\ u(x,0) = u_0(x) & x\in\Omega \\ u_\nu\left( x,t\right) =0 & (x,t)\in \partial \Omega \times (0,\infty) \end{array} \right. \qquad \left( N_t\right) \end{equation*} $$

Both of the above processes are stable in the sense that as time tends to infinity solutions converge to the “steady state solutions” of the corresponding problems for Laplace’s equation:

$$\begin{equation*} \left\{ \begin{array}{ll} \Delta u = 0 & x \in \Omega \\ u\left( x\right) =g\left( x\right) & x\in \partial \Omega \end{array} \right. \qquad \left( D\right) \end{equation*} $$

and

$$\begin{equation*} \left\{ \begin{array}{ll} \Delta u = 0 & x \in \Omega \\ u_\nu\left( x\right) =0 & x\in \partial \Omega \\ \int_\Omega u(x)\, dx = c_0 & x\in\Omega \end{array} \right. \qquad \left( N\right) \end{equation*} $$

Here the integral condition is added to obtain uniqueness. In fact this problem is not too interesting, since the solution is just the constant $u\equiv C= c_0/|\Omega|$. This corresponds to an initial heat distribution with “total mass” $c_0$; as we let diffusion take place, the temperature converges to a constant, and since no heat can come in or go out, this constant must equal the average of the total heat mass at the initial time, namely $c_0/|\Omega|$. Of course, there are boundary value problems with Neumann boundary conditions which are not trivial, but we will mainly concern ourselves with the Dirichlet problem, since we are interested in defining and understanding the concept of harmonic measure.
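The stability claim can be illustrated numerically. Here is a minimal sketch (the grid size, time step, and the initial data $u_0(x)=\sin(3\pi x)+x$ are assumptions made for the example): an explicit scheme for $u_t=u_{xx}$ on $(0,1)$ with Dirichlet values $u(0)=0$, $u(1)=1$ relaxes to the harmonic steady state, which in one dimension is the linear function $x$.

```python
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
u = np.sin(3*np.pi*x) + x        # assumed initial data; u(0)=0, u(1)=1
lam = 0.4                        # k^2 dt/dx^2; stable since lam <= 1/2
for _ in range(20000):
    # update interior points only: the boundary values stay fixed (Dirichlet)
    u[1:-1] += lam * (u[2:] - 2*u[1:-1] + u[:-2])
```

After many steps the transient $\sin(3\pi x)$ has decayed and $u$ is (to rounding error) the steady state of problem $(D)$ with this boundary data.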

Introduction to harmonic measure

One dimensional case

If we consider the Dirichlet problem $(D)$ in one dimension, and we let $\Omega$ be an open interval $I=(a,b)$, with $-\infty<a<b<\infty$, we end up with the second order differential equation in one variable $u''(x)=0$ for $x$ in $I$, with boundary values $u(a) = u_0\in\mathbb{R},\ u(b)=u_1\in\mathbb{R}$. Assume that the solution $u$ is twice continuously differentiable in $I$; then the Fundamental Theorem of Calculus tells us that

$$ u'(x) = u'(a)+\int_a^x u''(t)\,dt=u'(a)+\int_a^x 0\,dt=u'(a).$$

Hence, $u'(x)\equiv u'(a)=:u'$, i.e. $u'$ is constant in $I$. Applying the FTC again we obtain

$$ u(x) = u(a)+\int_a^x u'(t)\,dt = u_0+\int_a^x u'\,dt = u_0+(x-a)u'.$$

Since $u(b) = u_1$, we have that

$$ u_0+(b-a)u'=u_1\quad\Rightarrow\quad u'=\frac{u_1-u_0}{b-a}.$$

Finally, we obtain the solution $u(x)=u_0+\frac{u_1-u_0}{b-a}(x-a)$. Note that the solution $u$ only depends on the values at the boundary of $I$ and on the length of the interval $I$. We have

$$ u(x) = u_0 \frac{b-x}{b-a}+u_1\frac{x-a}{b-a}.$$
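As a sanity check (the endpoints and boundary values below are arbitrary choices), the closed-form solution can be compared against a standard finite-difference solve of $u''=0$; since discrete second differences of a linear function vanish exactly, the two agree to rounding error:

```python
import numpy as np

def u_exact(x, a, b, u0, u1):
    """u(x) = u0 (b-x)/(b-a) + u1 (x-a)/(b-a): the solution of u''=0, u(a)=u0, u(b)=u1."""
    return u0*(b - x)/(b - a) + u1*(x - a)/(b - a)

# finite-difference solve of u'' = 0 on n interior grid points
a, b, u0, u1 = 1.0, 3.0, 2.0, -1.0
n = 50
x = np.linspace(a, b, n + 2)
A = np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
rhs = np.zeros(n)
rhs[0], rhs[-1] = -u0, -u1        # known boundary values moved to the right-hand side
u_fd = np.linalg.solve(A, rhs)
```

The tridiagonal system encodes $u_{i-1}-2u_i+u_{i+1}=0$ at each interior point, which is the discrete analogue of $u''=0$.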

Using some basic measure theory we could write this solution as follows:

$$u(x)=\int_{\partial I}g(t)\,d\omega^x(t),$$

where $\partial I=\{ a,b\}$ is the boundary of $I$, $g$ is the boundary function $g(a)=u_0,\, g(b)=u_1$, and $\omega^x(t)$ is the measure on the boundary given by

$$\omega^x(t)=\delta_a(t)\frac{b-x}{b-a}+\delta_b(t)\frac{x-a}{b-a}.$$

The measures $\omega^x$, which depend on the interior point $x$, are actually probability measures, because they are nonnegative and have total measure $1$:

$$\int_{\partial I}1\,d\omega^x(t)=\frac{b-x}{b-a}+\frac{x-a}{b-a}=1.$$

We have seen that this family of measures provides the solutions to the Dirichlet problem

$$\begin{equation*} \left\{ \begin{array}{ll} u^{\prime \prime }(x)=0 & \text{in }I \\ u\left( x\right) =g\left( x\right) \quad & \text{on }\partial I \end{array} \right. \end{equation*} $$

The family of measures $\omega^x$ is what is known as “the harmonic measure”. Yes, even though technically this is a family of measures, we just call them “the harmonic measure”.

The measure space of the harmonic measure is $(\partial I,\mathcal{B})$, where $\mathcal{B}$ is the Borel $\sigma$-algebra of $\partial I$. Since in this case the boundary is a finite set, we can explicitly write

$$\mathcal{B}=\{\emptyset, \{a\},\{b\},\{a,b\}\}.$$

If $x,y\in I$ then clearly $\omega^x(A)=\omega^y(A)$ if $A=\emptyset$ or $A=\partial I$. On the other hand,

$$\frac{\omega^x(\{a\})}{\omega^y(\{a\})}=\frac{\frac{b-x}{b-a}}{\frac{b-y}{b-a}}=\frac{b-x}{b-y}\qquad\text{and}\qquad \frac{\omega^x(\{b\})}{\omega^y(\{b\})}=\frac{\frac{x-a}{b-a}}{\frac{y-a}{b-a}}=\frac{x-a}{y-a}.$$

The quantities above are bounded above and bounded away from zero with a constant depending only on the distance of the points $x$ and $y$ to the boundary $\partial I$. That is, suppose $0<\delta<(b-a)/2$, then if $x,y\in[a+\delta,b-\delta]$ it follows that

$$\frac{\delta}{b-a}\le\frac{\omega^x(A)}{\omega^y(A)}\le\frac{b-a}{\delta}\quad\text{ for all nonempty }A\in\mathcal{B}.$$

This says that the harmonic measures are all absolutely continuous with respect to each other, with constants which are uniform in compact subsets of $(a,b)$.
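This mutual boundedness is easy to test numerically. In the sketch below the interval, the value of $\delta$, and the sample count are arbitrary choices, and the code checks the bounds $\delta/(b-a)$ and $(b-a)/\delta$, which hold for all $x,y\in[a+\delta,b-\delta]$:

```python
import numpy as np

def weights(x, a, b):
    """Harmonic measure weights (omega^x({a}), omega^x({b})) for the interval (a, b)."""
    return np.array([(b - x)/(b - a), (x - a)/(b - a)])

a, b, delta = 0.0, 1.0, 0.1
rng = np.random.default_rng(1)
xs = rng.uniform(a + delta, b - delta, 1000)
ys = rng.uniform(a + delta, b - delta, 1000)
ratios = weights(xs, a, b) / weights(ys, a, b)   # ratios for A = {a} and A = {b}
```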

Higher dimensions

Of course, for the one dimensional problem there is no need for the sophistication of boundary measures: the problem can be tackled with basic calculus. However, this framework of harmonic measure solutions extends naturally to higher dimensions.

In general, for dimensions $n>1$ we may consider domains $\Omega$ given by connected open sets $\Omega\subset \mathbb{R}^n$. However, treating the Dirichlet problem at this level of generality may be challenging, so we will restrict ourselves to really nice domains with very nice boundaries, namely, the half space $\mathbb{R}^n_+$ or the unit ball $B_1$. Recall the Dirichlet problem $(D)$

$$\begin{equation} \left\{ \begin{array}{ll} \Delta u(x)=0 & \text{in }\Omega \\ u\left( x\right) =g\left( x\right) \quad & \text{on }\partial \Omega \end{array} \right. \qquad (D) \end{equation} $$

We could ask whether there exist harmonic measures $\omega^x$, one for each $x\in\Omega$, that could act as the building blocks for solutions. That is, we could ask under what circumstances we can assure the existence of a family of probability measures $\omega^x,\, x\in\Omega$, such that

$$u(x)=\int_{\partial\Omega}g(t)\, d\omega^x(t) $$

is the solution to the Dirichlet problem above.

Existence of the harmonic measure

In Chapter 2 of Gilbarg and Trudinger’s “Elliptic Partial Differential Equations of Second Order” [GT] there is a quick introduction to the existence of solutions to the Dirichlet problem in very general domains in $\mathbb{R}^n$. For our purposes, it will suffice to know that if the domain $\Omega$ in $\mathbb{R}^n$ has a nice enough boundary, for example, if the boundary of $\Omega$ is locally Lipschitz, then the Dirichlet problem $(D)$ has a unique solution for all continuous boundary values. We now give a precise definition of how we can quantify the smoothness of the boundary of the domain:

Definition 1 We say that an open set $\Omega \subset \mathbb{R}^{n}$ has boundary of class $C^{k}$, $k\geq 0$, if for every point $p\in \partial \Omega $ there exist a neighborhood $U$ of $p$ and a diffeomorphism $\Phi $ of class $C^{k}$, $\Phi :U\rightarrow V$ in $\mathbb{R}^{n}$ (i.e. the inverse $\Phi ^{-1}$ exists and $\Phi ^{-1}:V\rightarrow U$ is also of class $C^{k}$) such that

$$\begin{eqnarray*} \Phi \left( \partial \Omega \bigcap U\right) &=&\left\{ y\in \mathbb{R}^{n},y_{n}=0\right\} \bigcap V\qquad \text{and} \\ \Phi \left( \Omega \bigcap U\right) &=&\left\{ y\in \mathbb{R}^{n},y_{n}>0\right\} \bigcap V. \end{eqnarray*} $$

We say that $\Omega $ has a Lipschitz boundary if the above diffeomorphism is of class $C^{0,1}$ (Lipschitz).

We then have the following existence and uniqueness theorem (see [GT]):

Theorem 1 (Existence and uniqueness of solutions of the Dirichlet problem) Suppose $\Omega $ is an open subset of $\mathbb{R}^{n}$ with Lipschitz boundary $\partial \Omega $. Then for every continuous function $g$ on $\partial \Omega $ there exists a unique $u\in C^{2}\left( \Omega \right) \bigcap C^{0}\left( \overline{\Omega }\right) $ such that

$$\begin{equation*} \left\{ \begin{array}{ll} \Delta u=0\qquad & \text{in }\Omega \\ u=g & \text{on }\partial \Omega \end{array} \right. \qquad (D) \end{equation*} $$

The above theorem tells us that the Dirichlet problem always has a unique solution when the boundary data is continuous and the boundary is smooth enough. To deduce from here the existence of the harmonic measure we still need two more fundamental results: the maximum principle and the Riesz representation theorem.
As a consequence of the weak maximum principle (see [GT] ch.8), we have the following:

Theorem 2 (The maximum principle) If $u\in C^{2}\left( \Omega \right) \bigcap C^{0}\left( \overline{\Omega }\right) $ is a solution of the Dirichlet problem in a domain $\Omega$, then we have that $$\begin{equation*} \max_{x\in \Omega }\left\vert u\left( x\right) \right\vert \leq \max_{x\in \partial \Omega }\left\vert u\left( x\right) \right\vert \end{equation*} $$

Theorem 3 (Riesz representation theorem) Let $X$ be a locally compact Hausdorff space. For any positive linear functional $\psi$ on $C_c(X)$, there is a unique regular Borel measure $\mu$ on $X$ such that $$ \forall f\in C_{c}(X):\qquad \psi (f)=\int _{X}f(x)\,d\mu (x).$$

So, putting Theorems 1, 2, and 3 together we can prove the existence and uniqueness of the harmonic measure.

Theorem 4 (Existence and uniqueness of the harmonic measure) Let $\Omega$ be a Lipschitz domain in $\mathbb{R}^n$. Then for all $x\in\Omega$ there exists a unique regular Borel measure $\omega^x$ on $\partial\Omega$ such that for all continuous functions $g$ on $\partial\Omega$ the unique solution $u$ of the Dirichlet problem in $\Omega$ with boundary data $g$ is given by $$u(x) = \int _{\partial\Omega}g(s)\,d\omega^x(s).$$

Proof The result is an immediate consequence of Theorems 1, 2, and 3. Indeed, let $g\in C(\partial\Omega)$ and let $u$ be the unique function in $C^{2}\left( \Omega \right)\bigcap C^{0}\left( \overline{\Omega }\right) $ that solves the Dirichlet problem (D) with boundary data $g$, as guaranteed by Theorem 1. For each fixed $x\in\Omega$ we define the mapping $g\mapsto u(x)$, i.e. for each $g\in C(\partial\Omega)$ and $x\in\Omega$ we define $$\psi_x(g) = u(x),\qquad\textrm{ where }u\textrm{ is the unique solution to (D) with boundary value } g.$$ Then $\psi_x$ is a mapping from $C(\partial\Omega)$ into $\mathbb{R}$. Moreover, we claim that $\psi_x$ is a bounded linear functional on $C(\partial\Omega)$, i.e. $\psi_x$ is linear and bounded.

To prove that $\psi_x$ is linear, let $g,h\in C(\partial\Omega)$ and $a,b\in\mathbb{R}$. Since the Laplacian operator $\Delta$ is linear, we have that if $u$ is the solution to the Dirichlet problem (D) in $\Omega$ with boundary data $g$ and $v$ is the solution to the Dirichlet problem in $\Omega$ with boundary data $h$, then $w=au+bv$ is the solution to the Dirichlet problem in $\Omega$ with boundary data $ag+bh$. That is: $$\Delta u=0,\ \Delta v = 0\ \textrm{ in }\Omega\textrm{ and } u=g,\,v=h\textrm{ on }\partial\Omega\quad\Rightarrow\quad \Delta (au+bv)=0\ \textrm{ in }\Omega\textrm{ and } au+bv=ag+bh \textrm{ on }\partial\Omega$$ hence $$\psi_x(ag+bh)=w(x)=au(x)+bv(x)=a\psi_x(g)+b\psi_x(h).$$ So $\psi_x$ is a linear mapping. To prove that the mapping is continuous (bounded), we invoke the maximum principle, Theorem 2. Since $$|\psi_x(g)|=|u(x)|\le\max_{y\in\Omega}|u(y)|\le\max_{s\in\partial\Omega}|g(s)|=\| g\|_\infty,$$ we have that $$\|\psi_x\|=\sup_{g\in C(\partial\Omega),\, \| g\|_\infty=1}|\psi_x(g)|\le 1.$$ So $\psi_x$ is a bounded linear functional on $C(\partial\Omega)$, with norms uniformly bounded by $1$. Since $g\equiv 1$ on $\partial\Omega$ forces $u\equiv 1$, it follows that $\|\psi_x\|\ge \psi_x(1)=u(x)=1$, so each linear functional in fact has norm $\|\psi_x\|=1$. Moreover, $\psi_x$ is positive: if $g\ge 0$ on $\partial\Omega$, then applying the weak maximum principle to $-u$ shows that $u$ attains its minimum on $\partial\Omega$, so $u\ge 0$ in $\Omega$ and $\psi_x(g)=u(x)\ge 0$. Finally, by the Riesz representation theorem, Theorem 3, we conclude that for each $x\in\Omega$ there exists a unique regular Borel measure $\omega^x$ on $\partial\Omega$ such that $\psi_x(g)=\int_{\partial\Omega} g(s)\,d\omega^x(s)$, i.e.: $$u(x) = \int_{\partial\Omega} g(s)\,d\omega^x(s),\textrm{ where }u\textrm{ is the solution to (D) with boundary data }g.$$

Remark It is worth noting that the existence of the harmonic measure follows quite readily from Theorems 1, 2, and 3. Thus, if instead of the Laplacian operator $\Delta$ we consider another differential operator, for example an elliptic operator in divergence form $L_du=-\textrm{div}(A(x)\textrm{grad}(u))$, where $A(x)$ is an $n\times n$ real matrix function satisfying the ellipticity conditions $$\| A(x)\|_\infty\le\Lambda,\qquad \xi\cdot A(x)\xi\ge\lambda |\xi|^2,\quad\textrm{ for all }x\in\Omega,\, \xi\in\mathbb{R}^n,$$ for some constants $0<\lambda\le\Lambda<\infty$, then Theorems 1 and 2 still hold (see [GT]), and since Theorem 3 is a general result independent of our operators, we can conclude the existence and uniqueness of the $L_d$-harmonic measure $d\omega^x_{L_d}$ on $\partial\Omega$.

The Dirichlet problem on the unit ball

Since the problem as stated so far is too general and complex to solve with elementary methods, let us fix a really nice domain in $\mathbb{R}^n$: let $B$ denote the unit ball in $\mathbb{R}^n$, $$ B = \{ x\in\mathbb{R}^n:|x|<1\},\qquad\partial B=\{ x\in\mathbb{R}^n:|x|=1\}.$$ Instead of trying to find specific solutions to the Dirichlet problem in $B$, first we will see what kinds of solutions we can build with elementary functions. Suppose we choose to look for solutions in the set of homogeneous polynomials $$\mathcal{P}=\bigcup_{d=0}^\infty\mathcal{P}_d,\qquad\text{where }\mathcal{P}_d=\{p(x):p\text{ is a homogeneous polynomial of degree }d\}.$$ That is, $\mathcal{P}_0$ consists only of constant functions in $\mathbb{R}^n$, $\mathcal{P}_1$ consists of homogeneous polynomials of degree 1, i.e. $p(x)=a_1x_1+a_2x_2+\dots+a_nx_n$, and, in general,

$$p\in\mathcal{P}_d\quad\iff\quad p(x)=\sum_{|\alpha|=d}a_\alpha x^\alpha$$

where $\alpha$ is a multi-index $\alpha=(\alpha_1,\alpha_2,\dots,\alpha_n)$ with $\alpha_i\in\mathbb{N}\bigcup\{ 0\}$ for all $i=1,\dots ,n$, and $|\alpha|=\sum_{i=1}^n\alpha_i$. We would like to find the subset $\mathcal{H}\subset\mathcal{P}$ of homogeneous polynomials which are also harmonic, i.e.:

$$\mathcal{H}=\{ p\in\mathcal{P}:\Delta p = 0\}.$$

Since the Laplacian operator $\Delta$ involves taking two derivatives, it is clear that all constant polynomials and all polynomials of degree 1 are harmonic, i.e. $\mathcal{P}_0\bigcup\mathcal{P}_1\subset\mathcal{H}$. Now, notice that since the set of monomials of degree $d$, $V_{n,d}=\{ x^\alpha:|\alpha|=d \}$ is a basis of the vector space $\mathcal{P}_d$, the dimension $N(n,d)$ of $\mathcal{P}_d$ is given by the cardinality of the set

$$N(n,d)=\text{dim}(\mathcal{P}_d)=\#\{\alpha:|\alpha|=d\}. $$

We can prove (see appendix below) that

$$N(n,d)=\text{dim}\left(\mathcal{P}_d\right)={{d+n-1}\choose{n-1}}=\frac{(d+n-1)!}{d!(n-1)!}.$$

and that

$$\text{dim}\mathcal{H}_d=N(n,d)-N(n,d-2)={{d+n-1}\choose{n-1}}-{{d+n-3}\choose{n-1}},$$

where $\mathcal{H}_d=\mathcal{H}\bigcap\mathcal{P}_d$.
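Both dimension formulas are easy to verify computationally. The following sketch counts multi-indices by brute force and checks the count against the binomial expression (the ranges tested are arbitrary choices):

```python
from itertools import product
from math import comb

def N(n, d):
    """dim P_d for polynomials in n variables: the number of alpha with |alpha| = d."""
    return comb(d + n - 1, n - 1)

def count_multi_indices(n, d):
    # brute-force enumeration of alpha in {0,...,d}^n with |alpha| = d
    return sum(1 for alpha in product(range(d + 1), repeat=n) if sum(alpha) == d)

def dim_H(n, d):
    """dim H_d = N(n,d) - N(n,d-2), for d >= 2."""
    return N(n, d) - N(n, d - 2)
```

For instance, $\dim\mathcal{H}_d=2$ for every $d\ge2$ when $n=2$, and $\dim\mathcal{H}_2=5$ when $n=3$ (the five spherical harmonics of degree two).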

The Dirichlet problem in the plane.

We will first consider the case $n=2$, and work on the Dirichlet problem in the unit disk $B$ in the plane. From our previous considerations we have a big family of harmonic functions in the plane given by homogeneous polynomials in the variables $(x,y)$. For each degree $d=1,2,3,\dots$ the dimension of the vector space of harmonic polynomials which are homogeneous of degree $d$ is $$\text{dim}\mathcal{H}_d=N(2,d)-N(2,d-2)={{d+1}\choose{1}}-{{d-1}\choose{1}}=2.$$ Also, for $d=0$ it is clear that the corresponding dimension is $1$, since all constant functions are harmonic. The first interesting case is $d=2$: it is easy to check that $v_1(x,y)=xy$ and $v_2(x,y)=x^2-y^2$ constitute a basis for $\mathcal{H}_2$, and $$\left\{ x(x^2-3y^2),y(y^2-3x^2)\right\}\qquad\left\{ xy(x^2-y^2),x^4+y^4-6x^2y^2\right\}$$ are bases for $\mathcal{H}_3$ and $\mathcal{H}_4$, respectively. We can parametrize the boundary of the unit ball as $x(t)=(\cos(t),\sin(t)),\quad 0\le t<2\pi$. As an example, since $v_1(x,y)=xy$ is harmonic, it is the solution to the Dirichlet problem $$\begin{equation} \left\{ \begin{array}{ll} \Delta u(x)=0 & \text{in }B \\ u\left( x(t)\right) =\cos(t)\sin(t) \quad & \text{on }\partial B \end{array} \right. \end{equation} $$
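Claims like “$xy$ and $x^2-y^2$ are harmonic” can be verified mechanically. Below is a minimal sketch (the dictionary representation, mapping exponent pairs $(i,j)$ to coefficients, is an implementation choice) of an exact Laplacian for polynomials in two variables:

```python
def laplacian(p):
    """Exact Laplacian of a polynomial in (x, y) given as {(i, j): coeff}."""
    out = {}
    for (i, j), c in p.items():
        if i >= 2:
            out[(i - 2, j)] = out.get((i - 2, j), 0) + c*i*(i - 1)
        if j >= 2:
            out[(i, j - 2)] = out.get((i, j - 2), 0) + c*j*(j - 1)
    return {k: v for k, v in out.items() if v != 0}

v1 = {(1, 1): 1}                          # xy
v2 = {(2, 0): 1, (0, 2): -1}              # x^2 - y^2
h3 = {(3, 0): 1, (1, 2): -3}              # x(x^2 - 3y^2)
h4 = {(4, 0): 1, (0, 4): 1, (2, 2): -6}   # x^4 + y^4 - 6x^2y^2
```

An empty result means the polynomial is harmonic; e.g. $x^2$ is not, since $\Delta x^2 = 2$.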

The space $\mathcal{H}$ on the boundary

There is a simple way of finding all the harmonic homogeneous polynomials in the plane by using elementary complex analysis. Indeed, the functions $f_d(z)=z^d$ are analytic in the whole complex plane $\mathbb{C}$ for each nonnegative integer $d$. In particular, the real and imaginary parts of $z^d$ are harmonic functions in the plane. In fact, they are harmonic homogeneous polynomials of degree $d$. Explicitly, since $$z^d=(x+iy)^d=\sum_{k=0}^d{{d}\choose{k}}i^kx^{d-k}y^k,$$ we have $$\Re({z^d})= \begin{equation} \left\{ \begin{array}{ll} \sum_{k=0}^{d/2}{{d}\choose{2k}}(-1)^kx^{d-2k}y^{2k} & \text{if }d\text{ is even} \\ &\\ \sum_{k=0}^{(d-1)/2}{{d}\choose{2k}}(-1)^kx^{d-2k}y^{2k} \quad & \text{if }d\text{ is odd} \end{array} \right. \end{equation} $$ and $$ \Im({z^d})= \begin{equation} \left\{ \begin{array}{ll} \sum_{k=0}^{d/2-1}{{d}\choose{2k+1}}(-1)^kx^{d-2k-1}y^{2k+1} & \text{if }d\text{ is even} \\ &\\ \sum_{k=0}^{(d-1)/2}{{d}\choose{2k+1}}(-1)^kx^{d-2k-1}y^{2k+1} \quad & \text{if }d\text{ is odd} \end{array} \right. \end{equation} $$
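These expansions can be checked against direct complex exponentiation; the sample point below is an arbitrary choice:

```python
from math import comb

def re_zd(x, y, d):
    """Real part of (x + iy)^d via the binomial expansion above."""
    kmax = d//2 if d % 2 == 0 else (d - 1)//2
    return sum(comb(d, 2*k) * (-1)**k * x**(d - 2*k) * y**(2*k) for k in range(kmax + 1))

def im_zd(x, y, d):
    """Imaginary part of (x + iy)^d via the binomial expansion above."""
    kmax = d//2 - 1 if d % 2 == 0 else (d - 1)//2
    return sum(comb(d, 2*k + 1) * (-1)**k * x**(d - 2*k - 1) * y**(2*k + 1) for k in range(kmax + 1))
```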

For example, for degrees $d=0$ through $7$ we have $$\begin{array}{cll} \mbox{Degree }d & \mbox{Basis vector 1 } v_{d,1}(x,y) & \mbox{Basis vector 2 } v_{d,2}(x,y) \\ 0 & 1 & \\ 1 & x & y \\ 2 & x^2-y^2 & 2xy \\ 3 & x^3-3x y^2 & 3x^2y-y^3\\ 4 & x^4-6x^2y^2+y^4& 4x^3y-4xy^3\\ 5 & x^5-10x^3y^2+5xy^4& 5x^4y-10x^2y^3+y^5 \\ 6 & x^6-15x^4y^2+15x^2y^4-y^6& 6x^5y-20x^3y^3+6xy^5\\ 7 & x^7-21x^5y^2+35x^3y^4-7xy^6& 7x^6y-35x^4y^3+21x^2y^5-y^7 \end{array} $$ This actually has a simpler expression when we look at the boundary of the unit disk $B$. Indeed, since the two columns above are given by $\Re(z^d)$ and $\Im(z^d)$, using polar coordinates we have that $z=re^{it}$, so on the unit circle $z^d=e^{idt}=\cos(dt)+i\sin(dt)$. Thus, on the boundary of $B$ we have $$\begin{array}{cll} \mbox{Degree }d & \mbox{Basis vector 1 at the boundary} & \mbox{Basis vector 2 at the boundary} \\ 0 & 1 & \\ 1 & \cos(t) & \sin(t) \\ 2 & \cos(2t) & \sin(2t) \\ 3 & \cos(3t) & \sin(3t) \\ 4 & \cos(4t) & \sin(4t) \\ 5 & \cos(5t) & \sin(5t) \\ 6 & \cos(6t) & \sin(6t) \\ 7 & \cos(7t) & \sin(7t) \end{array} $$ By linearity of the Laplacian operator $\Delta$, any linear combination of solutions to the Dirichlet problem is also a solution, with boundary values given by the linear combination of the boundary values of the individual problems. Suppose that we have a boundary function given by a Fourier series $$g(t)=a_0+\sum_{k=1}^\infty a_k\cos(kt)+b_k\sin(kt),\qquad (FS)$$ where the series converges absolutely and uniformly in $t$, i.e.: $$\sum_{k=1}^\infty |a_k|+|b_k|=A<\infty.$$ Then we would hope that the solution to the Dirichlet problem with boundary value $g$ is given by $$u(x,y)=a_0+\sum_{k=1}^\infty a_kv_{k,1}(x,y)+b_kv_{k,2}(x,y),\qquad (U)$$ which takes a much clearer form in polar coordinates: $$u(r\cos(t),r\sin(t))=a_0+\sum_{k=1}^\infty r^k\left( a_k\cos(kt)+b_k\sin(kt)\right).\qquad (Up)$$

Thus, given a continuous function $g$ on $\partial B$, to solve the Dirichlet problem we just have to write $g$ as a Fourier series. The coefficients $a_k,b_k$ are given by $$a_0=\frac{1}{2\pi}\int_0^{2\pi}g(s)\, ds,\qquad a_k=\frac{1}{\pi}\int_0^{2\pi}g(s)\,\cos(ks)\, ds,\qquad b_k=\frac{1}{\pi}\int_0^{2\pi}g(s)\,\sin(ks)\, ds .$$ It would be convenient if we could be assured that the Fourier series (FS) actually converges to $g(s)$ at every point for any continuous function $g$. However, this is not true: P. du Bois-Reymond proved in 1873 that there exist continuous functions whose Fourier series diverge at at least one point. However, a pointwise convergence criterion due to Dini assures that if the function $g$ is slightly better than continuous, for example if $g$ is $\alpha$-Hölder continuous for some index $0<\alpha\le 1$, then its Fourier series converges at all points. Luckily for us, to determine a measure on a compact set such as the boundary of the unit disk it suffices to test the measure on smooth(er) functions (for example, $C^1$ functions), since these are dense in the set of continuous functions.
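A quick numerical sketch of this recipe (the test function $g(s)=e^{\cos s}$ and the truncation level are assumptions made for the example) computes the coefficients by quadrature and checks that the truncated series reproduces $g$:

```python
import numpy as np

s = np.linspace(0, 2*np.pi, 4096, endpoint=False)
ds = s[1] - s[0]
g = np.exp(np.cos(s))                       # a smooth 2*pi-periodic test function

a0 = g.sum() * ds / (2*np.pi)
recon = np.full_like(s, a0)
for k in range(1, 25):
    ak = (g * np.cos(k*s)).sum() * ds / np.pi   # a_k by periodic quadrature
    bk = (g * np.sin(k*s)).sum() * ds / np.pi   # b_k (zero here, since g is even)
    recon += ak*np.cos(k*s) + bk*np.sin(k*s)
```

For this analytic $g$ the coefficients decay extremely fast, so a short partial sum already matches $g$ to high accuracy.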

In any case, due to Dini’s criterion for pointwise convergence of Fourier series, we can be assured that if $g\in C^1(\partial B)$ then the Fourier series (FS) converges at every point $s\in\partial B$ to $g(s)$. We can then expect that the solution to the Dirichlet problem (D) with boundary data $g$ is given by (U)-(Up). Hence, given $(x,y)\in B$, we let $r=r(x,y)=\sqrt{x^2+y^2}<1$ and let $t=t(x,y)\in [ 0,2\pi)$ be determined by $\cos t=x/r$, $\sin t = y/r$; then $$\begin{eqnarray*} u(x,y) & = & a_0+\sum_{k=1}^\infty r^k\left( a_k\cos(kt)+b_k\sin(kt)\right)\\ & = & \frac{1}{2\pi}\int_0^{2\pi}g(s)\, ds+\frac{1}{\pi}\sum_{k=1}^\infty r^k\int_0^{2\pi}g(s)\,\left( \cos(ks)\cos(kt)+\sin(ks)\sin(kt)\right)\, ds \\ & = & \frac{1}{2\pi}\int_0^{2\pi}g(s)\, ds+\frac{1}{\pi}\int_0^{2\pi}g(s)\left(\sum_{k=1}^\infty r^k\,\cos(k(s-t))\right)\, ds\\ & = & \frac{1}{\pi}\int_0^{2\pi}g(s)\left(\frac{1}{2}+\sum_{k=1}^\infty r^k\, \cos(k(s-t))\right)\, ds. \end{eqnarray*}$$ Here we have been cavalier with matters of convergence while switching the integral sign and the series summation; these steps can be properly justified, but we will not do that here.
If all of these computations are indeed justified, we would obtain the density of the harmonic measure with respect to arc length, $$\omega^{(x,y)}(s)=\frac{1}{2\pi}+\frac{1}{\pi}\sum_{k=1}^\infty r^k\, \cos(k(s-t)),\qquad\textrm{ where }r=|(x,y)|\textrm{ and } t=\arg(x,y).$$ To find a more explicit formula for the harmonic measure, we use complex notation $z=(x,y)=re^{it}$ and the fact that $|z|<1$, so we can apply the geometric series summation formula to obtain $$ \begin{eqnarray*} \sum_{k=1}^{\infty }r^{k}\cos \left( kt\right) &=&\mathrm{Re}\sum_{k=1}^{\infty }\left( re^{it}\right) ^{k}=\mathrm{Re}\sum_{k=1}^{\infty }z^{k} \\ &=&\mathrm{Re}\left( \frac{1}{1-z}-1\right) =\mathrm{Re}\left( \frac{z}{1-z}\right) \\ &=&\frac{\mathrm{Re}\left( z\left( 1-\overline{z}\right) \right) }{\left\vert 1-z\right\vert ^{2}}=\frac{\mathrm{Re}\left( re^{it}\left( 1-re^{-it}\right) \right) }{\left\vert 1-re^{it}\right\vert ^{2}} \\ &=&\frac{\mathrm{Re}\left( re^{it}-r^{2}\right) }{\left( 1-re^{it}\right) \left( 1-re^{-it}\right) } \\ &=&\frac{r\cos t-r^{2}}{1-2r\cos t+r^{2}}. \end{eqnarray*} $$ Hence, $$ \begin{eqnarray*} \omega^{(x,y)}(s)& = &\frac{1}{2\pi }+\frac{1}{\pi }\sum_{k=1}^{\infty }r^{k}\cos \left( k\left( s-t\right) \right) \\ &=&\frac{1}{2\pi }+\frac{1}{\pi }\frac{r\cos \left( s-t\right) -r^{2}}{1-2r\cos \left( s-t\right) +r^{2}} \\ &=&\frac{1}{\pi }\frac{r\cos \left( s-t\right) -r^{2}+\frac{1}{2}\left( 1-2r\cos \left( s-t\right) +r^{2}\right) }{1-2r\cos \left( s-t\right) +r^{2}} \\ &=&\frac{1}{2\pi }\frac{1-r^{2}}{1-2r\cos \left( s-t\right) +r^{2}}. \end{eqnarray*} $$ We have obtained the so-called Poisson kernel, $\omega^{(x,y)}(s)=\frac{1}{2\pi }\frac{1-r^{2}}{1-2r\cos \left( s-t\right) +r^{2}}.$
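The computation above is easy to corroborate numerically: partial sums of the cosine series converge to the closed-form Poisson kernel, and the kernel has total mass $1$, consistent with $\omega^{(x,y)}$ being a probability measure. (The sample values of $r$, $t$, and the truncation level are arbitrary choices.)

```python
import numpy as np

def poisson(r, theta):
    """Poisson kernel (1/(2*pi)) * (1 - r^2) / (1 - 2 r cos(theta) + r^2)."""
    return (1 - r**2) / (2*np.pi*(1 - 2*r*np.cos(theta) + r**2))

# truncated series 1/(2*pi) + (1/pi) sum_k r^k cos(k*theta) vs the closed form
r, theta = 0.7, 1.3
series = 1/(2*np.pi) + sum(r**k * np.cos(k*theta) for k in range(1, 200)) / np.pi

# total mass of the kernel over the circle, by periodic quadrature
s = np.linspace(0, 2*np.pi, 20000, endpoint=False)
mass = poisson(0.5, s).sum() * (s[1] - s[0])
```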

One interesting consequence of the explicit formula for the density of the harmonic measure is that, since the maximum and minimum of the denominator occur when $\cos(s-t)=-1$ and $\cos(s-t)=1$, respectively, we have that $$\frac{1}{2\pi}\,\frac{1-r}{1+r}\le\omega^{(x,y)}(s)\le\frac{1}{2\pi}\,\frac{1+r}{1-r}\qquad\textrm{uniformly in }s.$$ In particular, for any two points $(x_1,y_1),\,(x_2,y_2)\in B$ we have $$\frac{(1-r)^2}{(1+r)^2}\le\frac{\omega^{(x_1,y_1)}(s)}{\omega^{(x_2,y_2)}(s)}\le\frac{(1+r)^2}{(1-r)^2},\qquad\textrm{ whenever }|(x_i,y_i)|\le r<1,\ i=1,2.$$ This says that the harmonic measures at different points are all absolutely continuous with respect to each other, with uniform constants in compact subsets of $B$.

Appendix

Computing the dimension of $V_{n,d}$

One way of computing this number is by splitting the basis set $V_{n,d}$ into the disjoint union of the monomials which are of degree $j$ in the first variable $x_1$ for $j=0,1,\dots,d$, i.e. $$V_{n,d} = \bigcup_{j=0}^d\{ x^\alpha\in V_{n,d}:\alpha_1=j\}.$$ Now we note that each one of the disjoint sets in the union on the right has exactly $N(n-1,d-j)$ elements. This yields the relation $$N(n,d)=\sum_{j=0}^d N(n-1,j).$$ It is also clear that $N(n,0)=1$, $N(1,d)=1$, and $N(n,1)=n$ for all $n$ and $d$. Then $$N(2,d)=\sum_{j=0}^d N(1,j)=d+1,$$ and $$N(3,d)=\sum_{j=0}^d N(2,j)=\sum_{j=0}^d(j+1)=\frac{(d+1)(d+2)}{2}.$$ By induction on $n$ and $d$, we will prove that $N(n,d)=\frac{(d+1)(d+2)\cdots(d+n-1)}{(n-1)!} = {{d+n-1}\choose{n-1}}$. Note that we already know this for $1\le n\le 3$. Suppose this is true for some $n\ge 1$; then by the recursion formula and the identity ${{j+n-1}\choose{n-1}}+{{j+n-1}\choose{n}}={{j+n}\choose{n}}$, valid for $j\ge 1$, we have $$N(n+1,d)=\sum_{j=0}^d N(n,j)=\sum_{j=0}^d{{j+n-1}\choose{n-1}}=1+\sum_{j=1}^d{{j+n}\choose{n}}-{{j+n-1}\choose{n}}$$ $$= \sum_{j=0}^d{{j+n}\choose{n}}-\sum_{j=0}^{d-1}{{j+n}\choose{n}}={{d+n}\choose{n}}.$$ This proves the inductive step, and we have established that the dimension of the vector space $\mathcal{P}_d$ of polynomials in $n$ variables which are homogeneous of degree $d$ is ${{d+n-1}\choose{n-1}}$ for all $n\ge 1$, $d\ge 0$.

Computing the dimension of $\mathcal{H}_d$

Now we would like to know the dimension of the subspaces $\mathcal{H}_d\subset\mathcal{P}_d$ of harmonic polynomials. First, note that the differential operators $\partial_i=\frac{\partial}{\partial x_i}$ define surjective linear transformations from $\mathcal{P}_d$ onto $\mathcal{P}_{d-1}$. Indeed, it suffices to note that for every element of the basis $p_\alpha(x)\in V_{n,d-1}$, the polynomial $x_ip_\alpha(x)$ satisfies $\partial_i(x_ip_\alpha(x))=c p_\alpha(x)$, where $c=\alpha_i+1\ge 1$ is a constant.

We claim that if $d\ge2$ then $\mathcal{P}_{d}=\mathcal{H}_{d}\oplus \left\vert x\right\vert ^{2}\mathcal{P}_{d-2}$, in the sense that if $p\left( x\right) \in \mathcal{P}_{d}$, then there exist unique polynomials $p_{1}\in \mathcal{H}_{d}$ (a harmonic polynomial homogeneous of degree $d$) and $p_{2}\in \mathcal{P}_{d-2}$ such that \begin{equation*} p\left( x\right) =p_{1}\left( x\right) +\left\vert x\right\vert ^{2}p_{2}\left( x\right) . \end{equation*}


To prove this, given a polynomial $p\left( x\right) $ in $\mathcal{P}_{d}$, it can be written in the form $p\left( x\right) =\sum_{\left\vert \alpha \right\vert =d}a_{\alpha }x^{\alpha }$. We associate to each polynomial a dual object, the differential operator \begin{equation*} p\left( \frac{\partial }{\partial x}\right) =\sum_{\left\vert \alpha \right\vert =d}a_{\alpha }\frac{\partial ^{\left\vert \alpha \right\vert }}{\partial x^{\alpha }}. \end{equation*} We can now define a positive definite inner product in $\mathcal{P}_{d}$ by \begin{equation*} \left\langle p,q\right\rangle =p\left( \frac{\partial }{\partial x}\right) q\left( x\right) . \end{equation*} Notice that if $\left\vert \alpha \right\vert =\left\vert \beta \right\vert $ then $\frac{\partial ^{\left\vert \alpha \right\vert }}{\partial x^{\alpha }}x^{\beta }=0$ if $\alpha \neq \beta $ and $\frac{\partial ^{\left\vert \alpha \right\vert }}{\partial x^{\alpha }}x^{\alpha }=\alpha !=\alpha _{1}!\alpha _{2}!\cdots \alpha _{n}!$. Now, if $q\left( x\right) =\sum_{\left\vert \beta \right\vert =d}b_{\beta }x^{\beta }$, then \begin{eqnarray*} \left\langle p,q\right\rangle &=&\sum_{\left\vert \alpha \right\vert =d}\sum_{\left\vert \beta \right\vert =d}a_{\alpha }b_{\beta }\frac{\partial ^{\left\vert \alpha \right\vert }}{\partial x^{\alpha }}x^{\beta }=\sum_{\left\vert \alpha \right\vert =d}\alpha !a_{\alpha }b_{\alpha } \\ &=&\sum_{\left\vert \alpha \right\vert =d}\sum_{\left\vert \beta \right\vert =d}a_{\alpha }b_{\beta }\frac{\partial ^{\left\vert \beta \right\vert }}{\partial x^{\beta }}x^{\alpha }=\left\langle q,p\right\rangle, \end{eqnarray*} and therefore \begin{equation*} \left\langle p,p\right\rangle =\sum_{\left\vert \alpha \right\vert =d}a_{\alpha }^{2}\alpha !>0\qquad\text{for }p\neq 0. \end{equation*}
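A small sketch makes this duality concrete for $n=2$ (the dictionary encoding and the sample polynomials are assumptions made for the example): the pairing $\langle p,q\rangle=\sum_\alpha \alpha!\,a_\alpha b_\alpha$ is implemented directly, and the key identity $\langle |x|^2p_0,\,p_1\rangle=\langle p_0,\Delta p_1\rangle$ is checked on examples.

```python
from math import factorial

def inner(p, q):
    """<p, q> = sum_alpha alpha! a_alpha b_alpha, for p, q homogeneous of equal degree."""
    return sum(factorial(i) * factorial(j) * c * q.get((i, j), 0.0) for (i, j), c in p.items())

def laplacian(p):
    """Exact Laplacian of a polynomial in (x, y) given as {(i, j): coeff}."""
    out = {}
    for (i, j), c in p.items():
        if i >= 2:
            out[(i - 2, j)] = out.get((i - 2, j), 0.0) + c*i*(i - 1)
        if j >= 2:
            out[(i, j - 2)] = out.get((i, j - 2), 0.0) + c*j*(j - 1)
    return out

def times_r2(p):
    """Multiply by |x|^2 = x^2 + y^2."""
    out = {}
    for (i, j), c in p.items():
        out[(i + 2, j)] = out.get((i + 2, j), 0.0) + c
        out[(i, j + 2)] = out.get((i, j + 2), 0.0) + c
    return out
```

The adjoint relation between multiplication by $|x|^2$ and $\Delta$ is exactly what identifies the orthogonal complement of $|x|^2\mathcal{P}_{d-2}$ with $\mathcal{H}_d$ in the argument that follows.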

Now, let $\left\vert x\right\vert ^{2}\mathcal{P}_{d-2}$ denote the subspace of $\mathcal{P}_{d}$ of all polynomials of the form $\left\vert x\right\vert ^{2}p_{2}\left( x\right) $ where $p_{2}\left( x\right) \in \mathcal{P}_{d-2}$. Let $W$ denote the orthogonal complement of $\left\vert x\right\vert ^{2}\mathcal{P}_{d-2}$ in $\mathcal{P}_{d}$ with respect to the inner product. That is, \begin{equation*} p_{1}\in W\quad \iff \quad \left\langle p_{2},p_{1}\right\rangle =0\quad \text{for all }p_{2}\in \left\vert x\right\vert ^{2}\mathcal{P}_{d-2}. \end{equation*} Writing $p_{2}\left( x\right) =\left\vert x\right\vert ^{2}p_{0}\left( x\right) $, with $p_{0}\left( x\right) \in \mathcal{P}_{d-2}$, this means \begin{equation*} \langle p_2,p_1\rangle =p_{0}\left( \frac{\partial }{\partial x}\right) \Delta p_{1}\left( x\right) =\left\langle p_{0},\Delta p_{1}\right\rangle =0\qquad \text{for all }p_{0}\in \mathcal{P}_{d-2}. \end{equation*} Since $\Delta p_{1}\in \mathcal{P}_{d-2}$, taking $p_{0}=\Delta p_{1}$ shows that $\Delta p_{1}=0$, because the inner product is positive definite. That is, $p_{1}\in W\iff \Delta p_{1}=0$, i.e. we have shown that $W=\mathcal{H}_{d}$. This proves our assertion. In particular, we have that \begin{equation*} \mathcal{P}_{d}=\mathcal{H}_{d}\oplus \left\vert x\right\vert ^{2}\mathcal{P}_{d-2}, \end{equation*} hence \begin{eqnarray*} \dim \mathcal{H}_{d} &=&\dim \mathcal{P}_{d}-\dim \mathcal{P}_{d-2} \\ &=&\left( \begin{array}{c} d+n-1 \\ n-1 \end{array} \right) -\left( \begin{array}{c} d+n-3 \\ n-1 \end{array} \right) . \end{eqnarray*}