Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share

opengl - C# Rendering OpenCL-generated image

Problem: I'm trying to render a dynamic Julia fractal in real time. Because the fractal is constantly changing, I need to render at least 20 frames per second, preferably more. What you need to know about a Julia fractal is that every pixel can be calculated independently, so the task is easily parallelizable.

First approach: Because I'm already used to Monogame in C#, I tried writing an HLSL shader that would do the job, but the compiler kept complaining because I exceeded the 64 allowed arithmetic slots (I need at least a thousand).

Second approach: Using the CPU, it took, as expected, about two minutes to generate one frame.

Third approach: I started learning the basics of OpenCL using a wrapper called Cloo. I actually got a quick, nice result by calculating the image data with OpenCL, reading the data back from the GPU, storing it in a Texture2D and drawing the texture to the screen. For a 1000x1000 image I get about 13 frames a second. This is still not quite what I had hoped for, as the image should be 1920x1080 to fill my screen, and the low frame rate is quite noticeable. I realised that I'm generating the image on the GPU, sending the data to the CPU and then sending it back to the GPU, so this round trip seems like an unnecessary step that, if removed, would probably solve my problem. I read on some forums that OpenGL can do this, but I haven't been able to find specific information.

Questions: Firstly, is there a simple way to draw the data generated by OpenCL directly, without involving the CPU (preferably compatible with Monogame)? If not, is it possible to implement this with OpenGL and afterwards combine it with Monogame? Secondly, why isn't this possible with a simple HLSL shader? Since HLSL and OpenCL both run on the GPU, why is HLSL so much more limited in the number of arithmetic operations it allows?

Edit

I found this site that does roughly what I want, but using a GLSL shader. This again shakes my faith in HLSL. Unfortunately, as Monogame doesn't support GLSL (yet), my questions remain unanswered.


1 Answer


To cover the questions: yes, OpenCL can draw directly, but Monogame apparently doesn't wrap anything on top of CL, so the answer to the first question is no. The follow-up (implement it in OpenGL and combine it with Monogame) is the right question: maybe, see the suggestions below. As for why HLSL is so limited: the profile you're compiling against is essentially PS 1.x, and pixel shaders only gained room for this much arithmetic as they evolved to 2.x and beyond, with wider data pipes... so you want DX12 support or GLSL/OpenGL.

Since you are close to your performance expectations using Cloo, why not try OpenCL.Net and/or OpenTK to bind the Julia calculations more closely to the Monogame API? If you have to go GPU-CPU-GPU, at least make that pipeline as wide as possible.

Alternately, a slightly sideways solution to your parallelization and frame-rate problem might be to integrate a GP-GPU wrapper such as QuantAlea's Alea with your Monogame solution. I'd also suggest looking at Cudafy, but Alea is more robust and has broader cross-vendor GPU support.

The build process will decide which portion of the Julia code runs on the GPU via Alea, and the Monogame portions will receive the pixel field for rendering. The sticking points will be getting the libraries to play nicely together and, ultimately, the frame rate once you get it working.

Bottom line: you're stuck, by choice, with HLSL (read: Microsoft DX9), and Monogame doesn't support GLSL/DX12... so you will have to maneuver creatively to get unstuck.

