Construct the kernel estimate of f(x), for each x\in\Re, by using the U(-1, 1) kernel; i.e., by taking
K(x)=\frac{1}{2},\quad \mathrm{for}\ -1\leq x\leq 1,\ \mathrm{and}\ 0\ \mathrm{otherwise}.
Here, it is convenient to use the indicator notation, namely, K(x)=\frac{1}{2}I_{[-1,1]}(x) (where, it is recalled, I_{A}(x)=1\ \mathrm{if}\ x\in A,\ \mathrm{and}\ 0\ \mathrm{if}\ x\in A^{c}). Then the estimate
\hat{f}_{n}(x)=\frac{1}{n h_{n}}\sum\limits_{i=1}^{n}K\left(\frac{x-X_{i}}{h_{n}}\right)\qquad\qquad\qquad\qquad\qquad(31)
becomes as follows:
\hat{f}_{n}(x)=\frac{1}{2n h_{n}}\sum\limits_{i=1}^{n}I_{[-1,1]}\left(\frac{x-X_{i}}{h_{n}}\right),\quad x\in\Re.\qquad\qquad\qquad(32)
So, I_{[-1,1]}\left(\frac{x-X_{i}}{h_{n}}\right)=1 if and only if x-h_{n}\leq X_{i}\leq x+h_{n}; in other words, in forming \hat{f}_{n}(x), we use only those observations X_{i} which lie in the window [x-h_{n},\,x+h_{n}]. The breadth of this window is, clearly, determined by h_{n}, and this is the reason that h_{n} is referred to as the bandwidth.
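The window interpretation above makes the estimate easy to compute directly: at a point x, one simply counts the observations falling in [x-h_{n},\,x+h_{n}] and normalizes. The following sketch (an illustration, not part of the text; the function name and the simulated normal sample are our own) implements the uniform-kernel estimate built from K(x)=\frac{1}{2} on [-1, 1]:

```python
import random

def uniform_kernel_estimate(x, sample, h):
    # With the U(-1, 1) kernel K = (1/2) on [-1, 1], the kernel estimate at x
    # reduces to (1 / (2 n h)) * #{i : x - h <= X_i <= x + h}: only the
    # observations inside the window [x - h, x + h] contribute.
    count = sum(1 for xi in sample if x - h <= xi <= x + h)
    return count / (2 * len(sample) * h)

# Illustration: estimate a standard normal density at x = 0 from a simulated
# sample; with a bandwidth of the form n^(-alpha), 0 < alpha < 1/2, the value
# should be close to the true density 1/sqrt(2*pi) ~ 0.399.
random.seed(0)
n = 10_000
sample = [random.gauss(0.0, 1.0) for _ in range(n)]
h = n ** -0.25
print(uniform_kernel_estimate(0.0, sample, h))
```

Note that the estimate is a step function of x here: it changes only when the window [x-h, x+h] gains or loses an observation, which is why smoother kernels are often preferred in practice.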
Usually, the minimal assumptions required of the kernel K and the bandwidth h_{n}, in order to establish some desirable properties of the estimate \hat{f}_{n}(x) given in (31), are the following:
K\ \mathrm{is\ a\ bounded\ probability\ density\ function\ with}\ |x|K(x)\to 0\ \mathrm{as}\ |x|\to\infty;\qquad(33)
0<h_{n}\to 0\ \mathrm{and}\ nh_{n}^{2}\to\infty\ \mathrm{as}\ n\to\infty.\qquad(34)
REMARK 4 Observe that requirements (33) are met for the kernel used in (32). Furthermore, the convergences in (34) are satisfied if one takes, e.g., h_{n}=n^{-\alpha} with 0<\alpha<1/2. Below, we record three (asymptotic) results regarding the estimate \hat{f}_{n}(x) given in (31).
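The bandwidth recipe in Remark 4 can be checked numerically (a sketch; the choice \alpha = 0.3 and the sample sizes are illustrative): with h_{n}=n^{-\alpha} and 0<\alpha<1/2, the bandwidth shrinks to 0 while both nh_{n} and nh_{n}^{2} grow without bound as n increases.

```python
# Behavior of the bandwidth h_n = n**(-alpha) for a fixed 0 < alpha < 1/2.
alpha = 0.3
rows = []
for n in (10**2, 10**4, 10**6):
    h = n ** -alpha
    rows.append((n, h, n * h, n * h * h))
    # h_n -> 0, while n*h_n and n*h_n**2 -> infinity as n grows.
    print(f"n={n:>9,}  h_n={h:.4f}  n*h_n={n * h:,.1f}  n*h_n^2={n * h * h:,.1f}")
```

The quantity nh_{n} is, up to a constant, the expected number of observations falling in a window of width 2h_{n}, which is why it must grow for the estimate to stabilize even as the window narrows.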