Wiener–Hopf factorisations as operator factorisations
$\def\sopmatrix#1{ \begin{pmatrix}#1\end{pmatrix} } \def\map[#1]{\mapengine #1,\void.} \def\mapenginesep_#1#2,#3.{\mapfunction{#2}\ifx\void#3\else#1\mapengine #3.\fi } \def\mapsep_#1[#2]{\mapenginesep_{#1}#2,\void.} \def\vcbr[#1]{\pr(#1)} \def\qqand{\qquad\hbox{and}\qquad} \def\qqfor{\qquad\hbox{for}\qquad} \def\qqas{\qquad\hbox{as}\qquad} \def\half{ {1 \over 2} } \def\D{ {\rm d} } \def\I{ {\rm i} } \def\E{ {\rm e} } \def\C{ {\mathbb C} } \def\R{ {\mathbb R} } \def\H{ {\mathbb H} } \def\Z{ {\mathbb Z} } \def\T{ {\cal T} } \def\CC{ {\cal C} } \def\FF{ {\cal F} } \def\HH{ {\cal H} } \def\LL{ {\cal L} } \def\vc#1{ {\mathbf #1} } \def\bbC{ {\mathbb C} } \def\norm#1{\left\| #1 \right\|} \def\pr(#1){\left({#1}\right)} \def\br[#1]{\left[{#1}\right]}$
The Wiener–Hopf (WH) method is an important technique for solving problems in applied mathematics, the most celebrated example being acoustic scattering on a half-line with Sommerfeld radiation conditions. Its usage in applications is an active research area, including a recent Isaac Newton Institute workshop. It is often used in a discrete context for UL factorisations of Toeplitz operators, and in a continuous context for solving integral equations. A nice reference that discusses both the continuous and discrete applications of the WH method and their numerical solution is Arabadzhyan and Engibaryan 1987 (h/t Anastasia Kisil). The purpose of this blog post is to present this interesting topic in a more hands-on way, alongside numerical demonstrations and an application to a simple time-stepping problem for a discrete heat equation.
Toeplitz operators
A Toeplitz operator is an ∞-dimensional matrix that is constant on the diagonals:
\[ A = \sopmatrix{ a_0 & a_{-1} & a_{-2} & \cdots \cr a_1 & a_0 & a_{-1} & \ddots \cr a_2 & a_1 & a_0 & \ddots \cr \vdots & \ddots & \ddots & \ddots } \]
In other words:
\[ \vc e_k^\top A \vc v = \sum_{j=0}^\infty a_{k-j} v_j \]
This is a semi-infinite discrete convolution operator with kernel $a_k$: here we think of $k$ as "physical space". Note that the following discussion is also valid in the block case, where $a_k$ are square $d \times d$ matrices. As a (scalar) example, we consider the semi-infinite discrete Laplacian, which is a simple Toeplitz operator:
\[ \Delta = \sopmatrix{-2 & 1 \\ 1 & -2 & 1 \\ & \ddots & \ddots & \ddots} \]
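To make the convolution identity concrete, here is a small finite-section check in plain Python (a sketch, separate from the Julia code used later in this post):

```python
# Finite section of the discrete Laplacian, with Toeplitz kernel
# a_{-1} = a_1 = 1, a_0 = -2, a_k = 0 otherwise.
n = 8
a = {-1: 1, 0: -2, 1: 1}
A = [[a.get(i - j, 0) for j in range(n)] for i in range(n)]

v = [float(j*j) for j in range(n)]  # an arbitrary test vector

# row k of A*v agrees with the convolution sum_j a_{k-j} v_j
k = 3
matvec = sum(A[k][j]*v[j] for j in range(n))
conv = sum(a.get(k - j, 0)*v[j] for j in range(n))
assert matvec == conv == v[2] - 2*v[3] + v[4]  # = 4 - 18 + 16 = 2
```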
We associate the operator with a symbol $\hat a$:
\[ A = T[\hat a(z)] = T\br[\sum_{j=-\infty}^\infty a_j z^j]. \]
That is, $\hat a(z)$ is a Laurent expansion whose coefficients are given by the kernel $a_k$. For our discrete Laplacian example we have $\Delta = T[z - 2 + z^{-1}]$.
Laurent expansions are Fourier expansions in disguise under the change-of-variables $z = \E^{\I \theta}$, and thus we can think of $\hat a(z)$ as an infinite version of the inverse discrete Fourier transform of the vector of Laurent coefficients $a_k$. Since $k$ is thought of as physical space, we think of $z$ or $\theta$ as "spectral space" and always assume $z$ is on the complex unit circle.
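As a sanity check on this correspondence, a few lines of Python (again a sketch, independent of the Julia code) verify that applying the Laplacian stencil to the wave $v_j = z^j$ multiplies it by the symbol $\hat a(z) = z - 2 + z^{-1} = 2\cos\theta - 2$ away from the boundary row:

```python
import cmath, math

ahat = lambda z: z - 2 + 1/z       # symbol of the discrete Laplacian

theta = 0.3
z = cmath.exp(1j*theta)
# on the unit circle the symbol is real: 2cos(θ) - 2
assert abs(ahat(z) - (2*math.cos(theta) - 2)) < 1e-14

# rows away from the boundary act as multiplication by the symbol
v = [z**j for j in range(50)]
k = 10
assert abs((v[k-1] - 2*v[k] + v[k+1]) - ahat(z)*v[k]) < 1e-12
```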
The UL factorisation
How can one invert a Toeplitz operator? For example, how can we solve the semi-infinite equation
\[ (I - h \Delta) \vc v = \vc f \]
where $\vc v$ and $\vc f$ are infinite vectors? This corresponds to a backward Euler time discretisation of the heat equation, that is if we want to solve
\[ \vc v_t = \Delta \vc v, \qquad \vc v(0) = \vc v_0 \]
and discretise in time via
\[ \vc v_t \approx {\vc v^{s+1} - \vc v^s \over h} \]
we obtain
\[ (I - h \Delta) \vc v^{s+1} = \vc v^s. \]
Of course we can truncate $A := I - h \Delta$ to get an $n \times n$ tridiagonal matrix, but this truncation breaks down as $t \rightarrow \infty$: clearly heat will propagate off to infinity, and by truncating we are imposing artificial Dirichlet conditions. So the motivating question: can we invert this operator exactly?
It turns out the answer is yes, by first finding the UL decomposition of a Toeplitz operator $A$. We demonstrate this in code. First we need to create a representation of the discrete Laplacian. We can do so as follows:
```julia
using InfiniteLinearAlgebra, MatrixFactorizations, LinearAlgebra, ApproxFun, FillArrays, Plots

Δ = SymTridiagonal(Fill(-2,∞), Fill(1,∞))
h = 0.01
A = I - h*Δ
```
```
∞×∞ SymTridiagonal{Float64,FillArrays.Fill{Float64,1,Tuple{InfiniteArrays.OneToInf{Int64}}}} with indices OneToInf()×OneToInf():
  1.02  -0.01    ⋅      ⋅      ⋅    …
 -0.01   1.02  -0.01    ⋅      ⋅
   ⋅    -0.01   1.02  -0.01    ⋅
   ⋅      ⋅    -0.01   1.02  -0.01
   ⋅      ⋅      ⋅    -0.01   1.02
   ⋮                                ⋱
```
Here `Fill(x,n)` is defined in FillArrays.jl: it is a lazy version of `fill(x,n)`, that is, a length-`n` vector whose entries all equal `x`; in this case `n = ∞`, which means the vector is unbounded. We can then compute the UL decomposition by calling
```julia
U,L = ul(A)
```
```
MatrixFactorizations.UL{Float64,Tridiagonal{Float64,FillArrays.Fill{Float64,1,Tuple{InfiniteArrays.OneToInf{Int64}}}},InfiniteArrays.OneToInf{Int64}}
U factor:
∞×∞ Bidiagonal{Float64,FillArrays.Fill{Float64,1,Tuple{InfiniteArrays.OneToInf{Int64}}}} with indices OneToInf()×OneToInf():
 1.0  -0.00980486    ⋅          …
  ⋅    1.0         -0.00980486
  ⋅     ⋅           1.0
  ⋮                             ⋱
L factor:
∞×∞ Bidiagonal{Float64,FillArrays.Fill{Float64,1,Tuple{InfiniteArrays.OneToInf{Int64}}}} with indices OneToInf()×OneToInf():
  1.0199    ⋅        ⋅     …
 -0.01     1.0199    ⋅
   ⋅      -0.01     1.0199
   ⋮                       ⋱
```
A UL decomposition reduces inversion of `A` to backward and forward substitution. In fact, if the right-hand side has a finite number of entries, let's say only the first `n` entries are (possibly) nonzero, then `U` can be inverted in $O(n)$ operations.
```julia
w = U \ [1; 2; 3; zeros(∞)]
```
```
∞-element LazyArrays.CachedArray{Float64,1,Array{Float64,1},FillArrays.Zeros{Float64,1,Tuple{InfiniteArrays.OneToInf{Int64}}}} with indices OneToInf():
 1.0198981342227236
 2.029414592216455
 3.0
 0.0
 0.0
 ⋮
```
Inverting `L`, on the other hand, propagates information down, but this can be done adaptively:

```julia
L \ w
```
```
∞-element LazyArrays.CachedArray{Float64,1,Array{Float64,1},FillArrays.Zeros{Float64,1,Tuple{InfiniteArrays.OneToInf{Int64}}}} with indices OneToInf():
 0.9999962573494935
 1.9996182496483235
 2.9610652067795056
 0.02903284186125082
 0.00028466306807784007
 2.7910826888648877e-6
 2.73661863784559e-8
 2.683217376139295e-10
 2.630858164908133e-12
 2.5795206700034706e-14
 ⋮
```
We thus can implement an efficient, adaptive heat equation solver:
```julia
v_0 = [1; 2; 3; zeros(∞)] # initial condition
N = 50_000 # number of time steps
v = Array{typeof(v_0)}(undef,N)
v[1] = v_0
for s = 2:N
    v[s] = L \ (U \ v[s-1])
end

# plot the first 1000 entries of a few time steps;
# the .+ 1E-250 is to avoid errors due to identical zeros
s = 10000; plot(v[s][1:1000] .+ 1E-250; label="t=$((s-1)*h)", yscale=:log10)
s = 25000; plot!(v[s][1:1000] .+ 1E-250; label="t=$((s-1)*h)")
s = N;     plot!(v[s][1:1000] .+ 1E-250; label="t=$((s-1)*h)")
```
This is by no means the only adaptive algorithm: one can also use an adaptive version of QR to compute the same thing, also in $O(n)$ operations, with no knowledge of the Toeplitz structure (it only uses bandedness):
```julia
qr(A) \ [1; zeros(∞)]
```
```
∞-element LazyArrays.CachedArray{Float64,1,Array{Float64,1},FillArrays.Zeros{Float64,1,Tuple{InfiniteArrays.OneToInf{Int64}}}} with indices OneToInf():
 0.9804864072151698
 0.009613535947337123
 9.425941321638455e-5
 9.242007341074306e-7
 9.061662573306174e-9
 8.884836979897142e-11
 8.711461889111829e-13
 8.541469969247136e-15
 8.374795202483393e-17
 8.211372859245783e-19
 ⋮
```
However, the UL decomposition has the benefit that the finite entries of the solution are computed exactly in bounded time: computing `U \ v[s-1]` is a finite computation as `v[s-1]` has only a finite number of entries, and inverting `L` then gives the exact entries of the inverse. The only question is when to stop, which we do here once we underflow, since all further entries will be numerically zero even if we were to try to calculate them. In other words, this is infinite linear algebra in floating-point arithmetic, without ever truncating.
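The two substitutions can be sketched in a few lines of plain Python (a sketch, separate from the Julia packages, with the factor entries hard-coded from the `ul` output above; `solve_U` and `solve_L` are illustrative names):

```python
# Bidiagonal-Toeplitz solves with finitely supported data; entries taken
# from the UL factorisation of I - 0.01Δ computed above.
ell, c = 1.0199019513592784, -0.01   # L: ell on diagonal, c on subdiagonal
u = -0.01/ell                        # U: 1 on diagonal, u on superdiagonal

def solve_U(f):
    # back substitution; exact and O(n) because the solution vanishes
    # beyond the support of the right-hand side
    v = [0.0]*len(f)
    for k in reversed(range(len(f))):
        v[k] = f[k] - u*(v[k+1] if k + 1 < len(f) else 0.0)
    return v

def solve_L(f, tol=1e-30):
    # forward substitution, continued adaptively until the entries underflow
    v, k = [], 0
    while k < len(f) or abs(v[-1]) > tol:
        fk = f[k] if k < len(f) else 0.0
        v.append((fk - c*(v[-1] if v else 0.0))/ell)
        k += 1
    return v

w = solve_U([1.0, 2.0, 3.0])  # ≈ [1.0198981…, 2.0294146…, 3.0]
v = solve_L(w)                # ≈ [0.9999963…, 1.9996182…, 2.9610652…, …]
```

The entries agree with the `U \ [1; 2; 3; zeros(∞)]` and `L \ w` computations shown earlier.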
This also answers why we want the UL decomposition, and not the LU decomposition: using the LU decomposition to invert `A` first propagates information down when inverting `L`. So whereas the UL decomposition can return, say, the first 10 entries of `A \ v[s-1]` in fixed computational time, the LU decomposition would require first computing all entries of `L \ v[s-1]` before doing back substitution with `U`. The UL decomposition is also nice because both `U` and `L` are Toeplitz, which is not the case for the LU decomposition.
Finally, in the UL decomposition we know `inv(L)` in closed form since `L` is Toeplitz, so a future improvement would be to lazily store the tail of `u` in a structured format, hence achieving bounded computational cost. (Can we also do time-stepping with this representation?)
Wiener–Hopf on the unit circle
How can we determine the UL decomposition? First assume that the winding number of $\hat a(z)$ is trivial, that is, its image over the unit circle does not surround the origin. For example, the matrix from the heat equation example just barely satisfies this condition. Then the Wiener–Hopf factorisation of $\hat a(z)$ is a decomposition
\[ \hat a(z) = \phi_-(z) \phi_+(z) \]
where $\phi_+$ is analytic inside the unit disk (that is, it has a convergent Taylor series) and $\phi_-$ is analytic outside the unit disk (that is, $\phi_-(z^{-1})$ has a convergent Taylor series), satisfying $\phi_-(\infty) = 1$. A simple argument matching terms in the Taylor series verifies the following:
\[ T[\hat a] = T[\phi_- \phi_+] = \underbrace{T[\phi_-]}_U \underbrace{T[\phi_+]}_L \]
Thus computing the Wiener–Hopf factorisation tells us the UL decomposition! Moreover, while in general $T[\hat a]^{-1} \neq T[\hat a^{-1}]$, equality does hold for triangular Toeplitz operators, thus we have
\[ A^{-1} = L^{-1} U^{-1} = T[\phi_-^{-1}] T[\phi_+^{-1}] \]
hence we can invert $A$ in closed form.
How do we determine a WH factorisation? In the scalar case we can do this analytically. Define the Cauchy transform over the unit circle
\[ \CC f(z) = {1 \over 2 \pi \I} \oint {f(\zeta) \over \zeta - z} \D \zeta \]
which satisfies Plemelj's theorem: for $z$ on the unit circle we have
\[ \CC^+ f(z) - \CC^- f(z) = f(z) \]
where $\CC^+ f(z)$ is the limit of Cauchy transform from inside the disk, and $\CC^- f(z)$ is the limit from outside.
This leads to the solution
\[ \phi_\pm(z) = \E^{\pm \CC^\pm[\log \hat a](z)}. \]
Note that if we know the Laurent series of $f(z)$
\[ f(z) = \sum_{k = -\infty}^\infty f_k z^k \]
then
\[ \CC^+ f(z) = \sum_{k = 0}^\infty f_k z^k \qqand \CC^- f(z) = -\sum_{k=-\infty}^{-1} f_k z^k \]
Hence we can implement this using ApproxFun.jl. Here is the heat equation example from before:
```julia
Δ̂ = Fun(z -> 1/z - 2 + z, Laurent(Circle()))
â = 1 - h*Δ̂
Cp = Fun(Taylor(), log.(â).coefficients[1:2:end])
exp.(Cp).coefficients[1:2:end] # diagonals of L
```
```
2-element Array{Complex{Float64},1}:
   1.0199019513592784 + 7.358469605405271e-20im
 -0.010000000000000016 + 6.3016258927046475e-19im
```
These entries match precisely the diagonals of `L` found above.
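As a cross-check independent of ApproxFun, the same factorisation can be computed in plain Python by building the Laurent coefficients of $\log \hat a$ with a naive discrete Fourier transform and splitting them according to Plemelj (a sketch, assuming 64 modes suffice to resolve $\log \hat a$):

```python
import cmath, math

h = 0.01
ahat = lambda z: 1 - h*(z - 2 + 1/z)   # symbol of I - hΔ; real and positive on the circle

# Laurent coefficients of log â via a naive discrete Fourier transform
n = 64
zs = [cmath.exp(2j*math.pi*k/n) for k in range(n)]
logs = [cmath.log(ahat(z)) for z in zs]
coeff = lambda j: sum(logs[k]*cmath.exp(-2j*math.pi*(j % n)*k/n) for k in range(n))/n

# Plemelj splitting: C⁺ keeps modes k ≥ 0, C⁻ (with a minus sign) keeps k < 0
Cplus = lambda z: sum(coeff(j)*z**j for j in range(n//2))
Cminus = lambda z: -sum(coeff(-j)*z**(-j) for j in range(1, n//2))

z = cmath.exp(0.7j)
phi_p = cmath.exp(Cplus(z))    # symbol of L, analytic in the disk
phi_m = cmath.exp(-Cminus(z))  # symbol of U, analytic outside the disk

assert abs(phi_m*phi_p - ahat(z)) < 1e-10
# φ₊ is exactly the linear symbol of the bidiagonal L factor found above
assert abs(phi_p - (1.0199019513592784 - 0.01*z)) < 1e-10
```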
However, this technique does not translate to the block case, as multiplying exponentials is not so simple:
\[ \E^{\CC^+[\log \hat a](z)} \E^{-\CC^-[\log \hat a](z)} \neq \E^{\CC^+[\log \hat a](z)-\CC^-[\log \hat a](z)} = \hat a(z) \]
To generalise to the block case, we can use a Riemann–Hilbert (RH) problem formulation. An RH problem consists of finding $\Phi_\pm(z)$ such that
\[ \Phi_+(z) = \Phi_-(z) \hat a(z), \]
where $\Phi_+(z)$ is analytic inside the circle and $\Phi_-(z)$ is analytic outside the circle, satisfying $\Phi_-(\infty) = I$. In this case we have
\[ \Phi_+(z) = \phi_+(z) = I + \CC^+ v(z)\qqand \Phi_-(z) = \phi_-(z)^{-1} = I + \CC^- v(z) \]
for some unknown function $v(z)$. This function satisfies a simple linear singular integral equation
\[ \CC^+ v(z) - \CC^- v(z) \hat a(z) = \hat a(z) - I \]
which can be solved readily using SingularIntegralEquations.jl, see the Wiener–Hopf example.
Unfortunately, neither the "exact" scalar solution nor the Riemann–Hilbert technique has bounded computational cost. In the scalar solution, we needed to compute a long expansion (OK... only 15 coefficients in this case... but arbitrarily long in other examples) for `log(â)` on the way to computing just 2 coefficients. And in the RH problem we need to resolve $\phi_-(z)^{-1}$, which again may have an arbitrarily large number of coefficients.
A linear algebra approach
Thankfully, there is an alternative approach. First note that any banded symmetric Toeplitz operator is a block tridiagonal Toeplitz operator. For example, we can represent the Toeplitz operator with symbol $a(z) = \epsilon (z^{-2} + z^2) + (z+z^{-1}) + 2$ as
```julia
ϵ = 0.1
A = BlockTridiagonal(Fill([ϵ 2; 0 ϵ], ∞), Fill([2.0 1; 1 2], ∞), Fill([ϵ 0; 2 ϵ], ∞))
```
```
∞×∞-blocked ∞×∞ BlockArrays.BlockArray{Float64,2,Tridiagonal{Array{Float64,2},FillArrays.Fill{Array{Float64,2},1,Tuple{InfiniteArrays.OneToInf{Int64}}}},Tuple{BlockArrays.BlockedUnitRange{InfiniteArrays.InfStepRange{Int64,Int64}},BlockArrays.BlockedUnitRange{InfiniteArrays.InfStepRange{Int64,Int64}}}}:
 2.0  1.0  │  0.1  0.0  │   ⋅    ⋅   │   ⋅    ⋅   │  …
 1.0  2.0  │  2.0  0.1  │   ⋅    ⋅   │   ⋅    ⋅   │
 ──────────┼────────────┼────────────┼────────────┼
 0.1  2.0  │  2.0  1.0  │  0.1  0.0  │   ⋅    ⋅   │
 0.0  0.1  │  1.0  2.0  │  2.0  0.1  │   ⋅    ⋅   │
 ──────────┼────────────┼────────────┼────────────┼
  ⋅    ⋅   │  0.1  2.0  │  2.0  1.0  │  0.1  0.0  │
  ⋅    ⋅   │  0.0  0.1  │  1.0  2.0  │  2.0  0.1  │
 ──────────┼────────────┼────────────┼────────────┼
  ⋮                                               ⋱
```
Thus we consider
\[ A = \sopmatrix{a & b \\ c & a & b\\ & c & a & \ddots \\ && \ddots & \ddots} \]
Now the UL decomposition will have the form
\[ A = \underbrace{\sopmatrix{1 & b\ell^{-1} \\ & 1 & b\ell^{-1} \\ && \ddots & \ddots }}_U \underbrace{\sopmatrix{\ell \\ c & \ell \\ & c & \ell \\ && \ddots & \ddots }}_L \]
hence determining the UL decomposition boils down to finding a single matrix $\ell$ that satisfies
\[ a = \ell + b \ell^{-1} c \]
In the scalar case this is a simple quadratic equation, leading to
\[ \ell = (a \pm \sqrt{a^2 - 4bc})/2 \]
We choose the $\pm$ to be equal to the sign of $a$ so that $\ell$ is as large as possible in modulus: this ensures invertibility of $U$ and $L$.
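For the heat-equation operator $A = I - h\Delta$ with $h = 0.01$ we have $a = 1.02$ and $b = c = -0.01$, and this formula reproduces the factors that `ul` computed above; a quick Python check:

```python
import math

a, b, c = 1.02, -0.01, -0.01   # diagonals of I - hΔ with h = 0.01
# ℓ = (a ± √(a² - 4bc))/2 with the sign of a, the larger root in modulus
ell = (a + math.copysign(math.sqrt(a*a - 4*b*c), a))/2

assert abs(ell - 1.0199019513592784) < 1e-12   # diagonal of the L factor
assert abs(b/ell + 0.00980486407) < 1e-9       # superdiagonal of the U factor
assert abs(ell + b*c/ell - a) < 1e-14          # ℓ + bℓ⁻¹c = a
```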
Adapting this to the matrix case is more complicated, but Delvaux & Dette 2012 provide the key: the matrix equation can be recast as a quadratic eigenvalue problem, which in turn can be recast as an eigenvalue equation using a companion matrix. We demonstrate this on the block-Toeplitz matrix from above:
```julia
ul(A - 10I, Val(false)) # Val(false) to indicate no pivoting
```
```
MatrixFactorizations.UL{Float64,BlockArrays.BlockArray{Float64,2,Tridiagonal{Array{Float64,2},FillArrays.Fill{Array{Float64,2},1,Tuple{InfiniteArrays.OneToInf{Int64}}}},Tuple{BlockArrays.BlockedUnitRange{InfiniteArrays.InfStepRange{Int64,Int64}},BlockArrays.BlockedUnitRange{InfiniteArrays.InfStepRange{Int64,Int64}}}},InfiniteArrays.OneToInf{Int64}}
U factor:
∞×∞-blocked ∞×∞ BlockArray (Bidiagonal blocks):
 1.0  0.0  │  -0.0127256  -0.00174427  │  …
 0.0  1.0  │  -0.256257   -0.0484888   │
 ──────────┼───────────────────────────┼
  ⋅    ⋅   │   1.0         0.0         │
  ⋅    ⋅   │   0.0         1.0         │
 ──────────┼───────────────────────────┼
  ⋮                                    ⋱
L factor:
∞×∞-blocked ∞×∞ BlockArray (Bidiagonal blocks):
 -7.99873   1.02563  │    ⋅         ⋅       │  …
  1.02563  -7.48264  │    ⋅         ⋅       │
 ────────────────────┼──────────────────────┼
  0.1       2.0      │  -7.99873   1.02563  │
  0.0       0.1      │   1.02563  -7.48264  │
 ────────────────────┼──────────────────────┼
  ⋮                                         ⋱
```
Convolution operators on the half-line and Wiener–Hopf
The continuous analogue of a Toeplitz operator is an integral convolution operator over the half-line:
\[ (A u)(x) = \lambda u(x) + \int_0^\infty a(x-y) u(y) \D y \]
for the kernel $a(x)$, where we again think of $x$ as "physical space". Just as in the discrete case, $a$ may be a matrix-valued function, in which case $u$ will be vector-valued. Note that the addition of $\lambda I$ is needed for the operators to be invertible: if the kernel $a(x)$ is smooth and decaying then the integral operator is compact on reasonable function spaces. (Alternatively we could have included a Dirac $\delta$ function in $a$.)
We use the inverse continuous Fourier transform to find the symbol:
\[ \lambda + \hat a(s) = \lambda + \int_{-\infty}^\infty a(x) \E^{\I s x} \D x \]
where we think of $s$ as "spectral space". Note that this can be viewed as a continuum limit of the discrete case symbol with $z = \E^{\I \theta}$. We therefore use the (formal) notation
\[ A = \T[\lambda + \hat a] \]
The classic example we'll use is
\[ a(x) = \E^{-|x|} \]
so that
\[ \lambda + \hat a(s) = \lambda + {2 \over 1 + s^2} = {\lambda + 2 + \lambda s^2 \over 1 + s^2} \]
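This transform is easy to confirm numerically; here is a plain-Python sketch using a trapezoidal rule (accuracy limited by the kink of $\E^{-|x|}$ at the origin and the truncation of the domain):

```python
import math, cmath

a = lambda x: math.exp(-abs(x))
s = 1.3

# trapezoidal rule on [-Lx, Lx]; the exponential tail beyond is negligible
N, Lx = 20000, 40.0
step = 2*Lx/N
vals = [a(-Lx + k*step)*cmath.exp(1j*s*(-Lx + k*step)) for k in range(N + 1)]
ahat = step*(sum(vals) - 0.5*(vals[0] + vals[-1]))

assert abs(ahat - 2/(1 + s**2)) < 1e-4   # matches 2/(1+s²)
```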
A more interesting example would be the Green's function associated with acoustic scattering, defined in terms of Hankel functions, though in that case Wiener–Hopf comes up more elegantly from Fourier transforming the PDE directly.
The continuous UL decomposition
The analogues of lower/upper triangular matrices are left/right Volterra integral operators
\[ L v(x) = \lambda v(x) + \int_0^x \ell(x-y) v(y) \D y \qqand U v(x) = \mu v(x) + \int_x^\infty u(x-y) v(y) \D y \]
Hence our goal is to write
\[ A = U L \]
where $\mu = 1$ (in analogy to $U$ being 1 on the diagonal in the discrete case). To determine such a factorisation we need to move into symbol space.
We claim that any left Volterra integral operator can be written as
\[ L = \T[\lambda + \phi_+] \]
where $\phi_+$ is analytic in the upper-half plane, in direct comparison to the discrete case where $L = T[\phi_+]$ for $\phi_+(\E^{\I \theta})$ analytic in the upper-half plane (since $\phi_+(z)$ is analytic in the disk). To see this, consider the Fourier transform of $\phi_+$
\[ \ell(x) := {1 \over 2 \pi} \int_{-\infty}^\infty \phi_+(s) \E^{-\I s x} \D s. \]
For $x < 0$, the integrand is analytic and decaying in the upper-half plane, so closing the contour there gives $\ell(x) = 0$ in that case. Therefore
\[ \T[\lambda + \phi_+] v(x) = \lambda v(x) + \int_0^\infty \ell(x-y) v(y) \D y = \lambda v(x) + \int_0^x \ell(x-y) v(y) \D y \]
By similar logic,
\[ U = \T[\mu + \phi_- ] \]
where $\phi_-$ is analytic in the lower-half plane.
Just as in the discrete case, we can invert $U$ and $L$ using the symbol. For example, we have
\[ L^{-1} = \T[(\lambda + \phi_+)^{-1}] \]
Whereas the discrete case followed by matching terms in the expansion, here this follows by combining the integrals.
Wiener–Hopf on the real line
We claim that the UL decomposition follows from the Wiener–Hopf factorisation. That is we now find
\[ \lambda + \hat a(s) = (1+\phi_-(s)) (\lambda + \phi_+(s)) \]
where $\phi_+(s)$ and $\phi_-(s)$ are analytic in the upper and lower half-planes, respectively, and decay at $\infty$. Such a decomposition can be found by taking logarithms, or in our simple model problem by factorising the numerator and denominator:
\[ \lambda + \hat a(s) = \underbrace{s - s_+ \over s-\I}_{1+\phi_-(s)} \underbrace{ \lambda {s-s_- \over s + \I}}_{\lambda+\phi_+(s)} \]
where $s_\pm = \pm \I \sqrt{1 + 2/\lambda}$ are the roots of the numerator $\lambda s^2 + \lambda + 2$ (for $\lambda > 0$, $s_+$ lies in the upper half-plane and $s_-$ in the lower). Note we've just grouped the poles and roots in the same half-plane, so that both $\phi_\pm(s)$ and $\phi_\pm(s)^{-1}$ are analytic in the relevant half-planes. Thus we arrive at
\[ A = \T[\lambda + \hat a] = \T[(1+\phi_-)(\lambda + \phi_+)] = \underbrace{\T[1+\phi_-]}_U \underbrace{\T[\lambda + \phi_+]}_L \]
and hence
\[ A^{-1} = \underbrace{\T[(\lambda + \phi_+)^{-1}]}_{L^{-1}} \underbrace{\T[(1+\phi_-)^{-1}]}_{U^{-1}} \]
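The factorisation is easy to verify pointwise; here is a plain-Python sketch with $\lambda = 1$, for which the numerator $\lambda s^2 + \lambda + 2$ has roots $s_\pm = \pm \I\sqrt{3}$:

```python
import cmath

lam = 1.0
sp = 1j*cmath.sqrt(1 + 2/lam)   # numerator root in the upper half-plane
sm = -sp                        # numerator root in the lower half-plane

ahat = lambda s: lam + 2/(1 + s**2)
fm = lambda s: (s - sp)/(s - 1j)       # 1 + φ₋: analytic in the lower half-plane
fp = lambda s: lam*(s - sm)/(s + 1j)   # λ + φ₊: analytic in the upper half-plane

# the two factors multiply back to the symbol on the real line
for s in (0.5, -2.0, 3.7):
    assert abs(fm(s)*fp(s) - ahat(s)) < 1e-12
```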
Linear algebra approach?
We are left with a question: is there an analogue of the linear algebra approach in the continuous case? It's not clear, partly because there's no immediate notion of "block-tridiagonal" in this setting.