You can implement everything you need, as long as you are able to pass the required information to the shader.
The trick, in cases like this, is to perform multi-pass rendering: the final shader takes a certain number of samplers, which are the non-blurred sources, and uses them to compute the blurred values.
For example, by using multiple textures it is possible to emulate effects based on the accumulation buffer.
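As a rough illustration (the sampler names and the fixed count of four passes are made up for the example), a fragment shader that averages several previously rendered textures behaves much like the old accumulation-buffer sequence:

```cpp
// Hypothetical fragment shader, shown as a C++ raw string literal: each
// sampler holds one previously rendered pass and the result is their
// average, roughly what accumulating four frames with glAccum(GL_ACCUM,
// 0.25) and reading back with GL_RETURN would have produced.
static const char *accumFragSrc = R"(
#version 330 core
uniform sampler2D uFrame0;
uniform sampler2D uFrame1;
uniform sampler2D uFrame2;
uniform sampler2D uFrame3;
in  vec2 vTexCoord;
out vec4 fragColor;
void main()
{
    fragColor = 0.25 * (texture(uFrame0, vTexCoord) +
                        texture(uFrame1, vTexCoord) +
                        texture(uFrame2, vTexCoord) +
                        texture(uFrame3, vTexCoord));
}
)";
```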
To implement a Gaussian blur, render the scene onto a framebuffer object with a texture attached to its color attachment. This is the first pass.
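A minimal sketch of that first pass might look like this; it assumes an existing GL 3.3+ context with a function loader such as GLAD or GLEW, and `width`, `height` and `renderScene()` are hypothetical names standing in for your own code:

```cpp
GLuint colorTex = 0, fbo = 0;

// Texture that will receive the rendered scene.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Framebuffer object with the texture bound to its color attachment.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
// A depth renderbuffer would normally be attached here too if the scene
// needs depth testing.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* report the error and bail out */
}

// First pass: draw the scene into the texture instead of the window.
glViewport(0, 0, width, height);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderScene();                            // hypothetical scene draw call
glBindFramebuffer(GL_FRAMEBUFFER, 0);     // back to the default framebuffer
```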
In the second pass, render a textured quad, where the texture is the one generated in the first step. Texture coordinates are passed from the vertex stage to the fragment stage and interpolated across the quad, so each fragment has its own texture coordinate; apply offsets to that coordinate to fetch the texels around the underlying one, and combine them with Gaussian weights to perform the blur.
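Something along these lines could work for the second-pass shaders (the uniform and attribute names are only placeholders; a full Gaussian blur is usually split into a horizontal and a vertical pass, and only the horizontal one is shown here):

```cpp
// Second-pass shaders, embedded as C++ raw string literals. "uScene" is
// the texture produced by the first pass and "uTexelSize" is
// vec2(1.0/width, 1.0/height); both names are assumptions of this sketch.
static const char *quadVertSrc = R"(
#version 330 core
layout(location = 0) in vec2 aPos;        // quad corner in NDC
layout(location = 1) in vec2 aTexCoord;
out vec2 vTexCoord;
void main()
{
    vTexCoord = aTexCoord;                // interpolated for each fragment
    gl_Position = vec4(aPos, 0.0, 1.0);
}
)";

static const char *blurFragSrc = R"(
#version 330 core
uniform sampler2D uScene;      // non-blurred result of the first pass
uniform vec2      uTexelSize;  // size of one texel in texture coordinates
in  vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    // 5-tap Gaussian weights (sigma around 1), normalized to sum to 1.
    const float w[5] = float[](0.0545, 0.2442, 0.4026, 0.2442, 0.0545);
    vec4 sum = vec4(0.0);
    for (int i = -2; i <= 2; ++i) {
        vec2 offset = vec2(float(i), 0.0) * uTexelSize;  // horizontal taps
        sum += w[i + 2] * texture(uScene, vTexCoord + offset);
    }
    fragColor = sum;
}
)";
```

Compile and link these as usual, bind the first-pass texture to a texture unit, set `uScene` and `uTexelSize`, and draw the full-screen quad; running the same kernel again with vertical offsets completes the blur.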