Bay 12 Games Forum


Author Topic: Using GPU power instead of CPU power?  (Read 10548 times)

Grennba

  • Escaped Lunatic
Using GPU power instead of CPU power?
« on: May 10, 2011, 06:18:42 pm »

So I recently spent a fair amount of time attempting to brute-force a list of MD5 hashes to recover some passwords I had lost. ;)

This equated to a whole lot of computations and required tons of CPU time.

Then I found a nifty little tool (Hashcat, which uses CUDA) that would harness the power of my GPU instead of my CPU to perform the brute-force attacks. The GPU brute-forcing was at least four times faster than my CPU brute-forcing.

I'm not going to pretend I know enough about the hardware to understand what I'm talking about, but it seems like there's some potential for DF to harness the power of GPUs for some of its calculations.

So, what are the odds of DF harnessing the power of GPUs? I have a feeling this would require massively parallelizing the work across multiple threads, and from what I was able to glean from searching the forums, Mr T doesn't care much for multithreading?

- - S

kaenneth

  • Bay Watcher
  • Catching fish
Re: Using GPU power instead of CPU power?
« Reply #1 on: May 10, 2011, 08:00:29 pm »

I was thinking it would be great for the fluid (water/magma) simulation, however...

1) The code would likely be GPU-specific and incompatible between ATI/NVidia/Intel, multiplying the work.

2) There would have to be a CPU fallback that works the same for people without compatible GPUs.

3) From what I understand, GPU makers have basically crippled the non-video-output capability (or at least don't optimize for it), so getting a lot of data OUT of a video card via anything other than the video port is not effective unless you buy the special computation versions of the cards. (I work on a product that used GPU processing in the last version, but the GPU processing performance dropped a LOT with newer, more powerful video cards...)

It sucks, because games could use those features for physics models, but right now they are mostly used for non-game-altering physics (sparks, gibs, ragdoll bodies, etc. that the player can walk through without effect).
Quote from: Karnewarrior
Jeeze. Any time I want to be sigged I may as well just post in this thread.
Quote from: Darvi
That is an application of trigonometry that never occurred to me.
Quote from: PTTG??
I'm getting cake.
Don't tell anyone that you can see their shadows. If they hear you telling anyone, if you let them know that you know of them, they will get you.

aepurniet

  • Bay Watcher
Re: Using GPU power instead of CPU power?
« Reply #2 on: May 10, 2011, 10:04:28 pm »

This would essentially be trivial once DF has implemented one feature. There are enough open GPU computing libraries that the work would not increase. However, since work done on the GPU essentially runs in parallel with whatever happens on the CPU, DF would need to run in a multithreaded mode. Currently that's the goal for 2025.

Niseg

  • Bay Watcher
Re: Using GPU power instead of CPU power?
« Reply #3 on: May 11, 2011, 05:37:53 am »

When trying to solve a complex computer science problem faster, you generally have 2 approaches:
1. Getting a more powerful computing engine and adapting your code to work with it.
2. Simplifying the problem.

The approach you took is approach 1. Many people think this approach is the best course of action, but in my opinion it's not. There is a limit to how much you can speed a program up with multi-core and multithreading, and in the age of mobile computing, pushing the CPU to the edge is not the answer.

The best approach is to simplify the problem, and the most common way is divide and conquer. If you replace the problem with many smaller problems, you'll generally find a more efficient solution.

I had this problem with my pathfinding simulator. The original A* I used did 3 linear searches (O(N)) per step: one to find the best node, and two to check whether the node is in the open or closed set. I replaced the O(1) "push" with an O(log N) "insert in order", which converted the "search for minimum" from O(N) to O(1). This made it run about 1000 times faster. With a 128x128 map, max(N) = 16384, and those 3 linear searches take about 3*(N(N+1)/2), a worst case of roughly 400 million iterations. That is reduced to 16384 ordered inserts, at most N*log(N) = about 0.229 million iterations.
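Roughly what that change looks like, as a minimal C++ sketch (illustrative only, not my simulator's actual code; it assumes a plain walkability grid and a Manhattan heuristic):

Code: [Select]
// Sketch of A* with a binary-heap open list: "find best node" is a pop
// instead of an O(N) scan, and open/closed membership is an O(1) lookup.
#include <cstdlib>
#include <queue>
#include <vector>

constexpr int W = 128, H = 128;            // the 128x128 map from the post

struct Entry { int f, idx; };              // f = g + h; idx = y * W + x
struct ByF { bool operator()(Entry a, Entry b) const { return a.f > b.f; } };

int heuristic(int idx, int goal) {         // Manhattan distance, admissible here
    return std::abs(idx % W - goal % W) + std::abs(idx / W - goal / W);
}

// Returns path length in steps, or -1 if no path exists.
int astar(const std::vector<char>& walkable, int start, int goal) {
    std::vector<int> g(W * H, -1);         // -1 means "not reached yet"
    std::vector<char> closed(W * H, 0);
    std::priority_queue<Entry, std::vector<Entry>, ByF> open;

    g[start] = 0;
    open.push({heuristic(start, goal), start});   // O(log N) "insert in order"

    while (!open.empty()) {
        int cur = open.top().idx;          // O(1) "search for minimum"
        open.pop();
        if (cur == goal) return g[cur];
        if (closed[cur]) continue;         // stale duplicate entry, skip it
        closed[cur] = 1;

        static const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int d = 0; d < 4; ++d) {
            int x = cur % W + dx[d], y = cur / W + dy[d];
            if (x < 0 || x >= W || y < 0 || y >= H) continue;
            int nxt = y * W + x;
            if (!walkable[nxt] || closed[nxt]) continue;
            if (g[nxt] == -1 || g[cur] + 1 < g[nxt]) {
                g[nxt] = g[cur] + 1;
                open.push({g[nxt] + heuristic(nxt, goal), nxt});
            }
        }
    }
    return -1;                             // open set drained: no path
}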

Now here is the problem: even with all those optimizations in (and I'm guessing Toady has them all), the problem is still too complicated when max(N) is ~1 million. You go through max(N) every time, and 19*1 million is still a lot. For this reason I generated an overlay graph of "rooms", which takes about max(N) iterations to build. It records the rooms, how they are connected, and the access points (I pick the first one, which is not optimal). This reduces the "path doesn't exist" penalty from the number of nodes to the number of rooms. Instead of looking at the "big world", the pathfinding creature only looks at the route it picked. Most paths also become trivial, so A* hits its best case almost every time. It's also less memory-intensive and should have very good cache performance. Unfortunately it needs more work ::) because I haven't even started on room updates yet.
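The reachability half of that, as a bare-bones sketch (again illustrative; the flood fill that labels rooms and builds the adjacency lists is assumed, not shown):

Code: [Select]
// Sketch of the room overlay: every walkable tile carries a room id, rooms
// know their neighbors, and a cheap BFS over rooms answers "can a path
// exist at all?" before any tile-level A* runs.
#include <queue>
#include <vector>

struct RoomGraph {
    std::vector<int> roomOf;                   // tile index -> room id, -1 = wall
    std::vector<std::vector<int>> adj;         // room id -> adjacent room ids
};

bool maybeReachable(const RoomGraph& rg, int startTile, int goalTile) {
    int a = rg.roomOf[startTile], b = rg.roomOf[goalTile];
    if (a < 0 || b < 0) return false;          // one endpoint isn't walkable
    std::vector<char> seen(rg.adj.size(), 0);
    std::queue<int> q;
    q.push(a);
    seen[a] = 1;
    while (!q.empty()) {
        int r = q.front(); q.pop();
        if (r == b) return true;               // rooms connect: worth running A*
        for (int n : rg.adj[r])
            if (!seen[n]) { seen[n] = 1; q.push(n); }
    }
    return false;                              // failure costs O(rooms), not O(tiles)
}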

I admit pathfinding isn't the only thing slowing DF down, but I can work on 1 suggestion at a time ;). I think performance issues are a pressing concern, but I'm hoping that showing a working alternative will make Toady's life easier when he finds the time to work on them.
Projects: Path finding simulator (thread) - A*, weighted A* (traffic zones), user-set waypoints (path caching), automatic waypoint room navigation, no-cache room navigation.

Grennba

  • Escaped Lunatic
Re: Using GPU power instead of CPU power?
« Reply #4 on: May 11, 2011, 07:09:34 am »

I did a little more research. It looks like the newer cards are attempting to make it easier to use the GPU's power for computationally intensive projects. See link below.

http://www.nvidia.com/object/cuda_gpus.html

I guess the only question then is... would taking the time to multithread DF yield high enough performance gains to make it worth the time and effort?

From a little more research, it looks like the answer is... only if you can multithread your process to have hundreds of parallel threads. If you are only going to go from one thread to five threads, then multi-core CPUs are good enough for you. But if you can go from one thread to a hundred or a thousand threads, then the GPU will be able to process them all in parallel, significantly increasing performance.
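For example (a toy sketch of my own, nothing to do with DF's actual code): a per-cell update rule is "data parallel". On a CPU you chunk the grid across a few worker threads; a GPU would instead launch one thread per cell, thousands at once:

Code: [Select]
// Toy data-parallel update: the same tiny rule applied to every cell.
// A handful of CPU threads each take a chunk; a GPU would map one
// lightweight thread to each index i instead.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

void updateCell(std::vector<float>& water, std::size_t i) {
    water[i] *= 0.99f;                         // placeholder "evaporation" rule
}

void updateAll(std::vector<float>& water, unsigned workers) {
    if (workers == 0) workers = 1;
    std::vector<std::thread> pool;
    std::size_t chunk = (water.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&water, chunk, w] {
            std::size_t begin = w * chunk;
            std::size_t end = std::min(begin + chunk, water.size());
            for (std::size_t i = begin; i < end; ++i)
                updateCell(water, i);          // no cell depends on another here
        });
    }
    for (auto& t : pool) t.join();             // wait for every chunk to finish
}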



Quote from: aepurniet
This would essentially be trivial once DF has implemented one feature. There are enough open GPU computing libraries that the work would not increase. However, since work done on the GPU essentially runs in parallel with whatever happens on the CPU, DF would need to run in a multithreaded mode. Currently that's the goal for 2025.

But word on the street is that multithreading isn't high on the to-do list. And I don't really blame Toady. Reworking your code to handle massive multithreading isn't very fun, especially if it requires a BIIG reorganization of the code. I'd probably spend the time creating new and exciting content too.

Neowulf

  • Bay Watcher
Re: Using GPU power instead of CPU power?
« Reply #5 on: May 11, 2011, 10:36:40 am »

Would be nice if he could just spin the pathfinding off into a separate DLL. Then sufficiently motivated 3rd parties could create their own GPGPU libraries to cover different GPUs.
I haven't done any real programming since before multi-core CPUs became available, but I'd assume the DLL could implement its own multithreading internally and just let the main program wait, like it already does when pathing.
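Something like this boundary is what I mean (purely hypothetical C++; DF exposes no such interface today):

Code: [Select]
// Hypothetical plugin boundary: the game only sees an abstract pathfinder,
// and a DLL / shared library is free to implement it with its own threads,
// a GPU back end, or anything else.
#include <vector>

struct PathRequest {
    int startTile;
    int goalTile;
};

class IPathfinder {
public:
    virtual ~IPathfinder() = default;
    // Returns the path as tile indices; an empty vector means "no path".
    // An implementation may block here or hand the request to worker threads.
    virtual std::vector<int> findPath(const PathRequest& req) = 0;
};

// The library would export a factory with C linkage so any loader can find it.
extern "C" IPathfinder* createPathfinder();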

Meh, I should probably read up more on this.
Re: Using GPU power instead of CPU power?
« Reply #6 on: May 11, 2011, 02:10:10 pm »

OpenGL does use your GPU and all the graphics card's onboard processing functions. I am shocked to hear that your GPU is much faster than your CPU; you must have an old old computer with a new new graphics card.

Chunes

  • Bay Watcher
Re: Using GPU power instead of CPU power?
« Reply #7 on: May 11, 2011, 03:05:06 pm »

Indeed, as Niseg has already said, this is a poor solution. A lot like using the Titanic to cross a river.

There are so many optimizations possible in DF that would decrease its complexity far more than a bunch of unnecessary hardware could speed it up.

Chunes

  • Bay Watcher
Re: Using GPU power instead of CPU power?
« Reply #8 on: May 11, 2011, 03:06:38 pm »

Oops, double post.

Niyazov

  • Bay Watcher
  • shovel them under and let me work - I am the grass
Re: Using GPU power instead of CPU power?
« Reply #9 on: May 11, 2011, 04:00:27 pm »

Quote
OpenGL does use your GPU and all the graphics card's onboard processing functions. I am shocked to hear that your GPU is much faster than your CPU; you must have an old old computer with a new new graphics card.

You're comparing apples and oranges here. Modern GPUs are designed to execute a large number of similar calculations in parallel at high speeds, but they aren't able to efficiently handle as many different kinds of calculations as a CPU can. Brute-force cryptographic cracking is the kind of repetitive, parallelized task that GPU architectures can perform very, very well. The Cell architecture is also great for parallelized calculations, which is one reason it's used in both the PlayStation 3 and in dedicated MD5-breaking clusters.


kaenneth

  • Bay Watcher
  • Catching fish
Re: Using GPU power instead of CPU power?
« Reply #11 on: May 11, 2011, 06:54:05 pm »

IF it were used, it would be much better for doing fluid flows (water/magma/filth/global weather...) than pathfinding.
Re: Using GPU power instead of CPU power?
« Reply #12 on: May 11, 2011, 08:28:53 pm »

Quote from: Niyazov
You're comparing apples and oranges here. Modern GPUs are designed to execute a large number of similar calculations in parallel at high speeds, but they aren't able to efficiently handle as many different kinds of calculations as a CPU can. Brute-force cryptographic cracking is the kind of repetitive, parallelized task that GPU architectures can perform very, very well. The Cell architecture is also great for parallelized calculations, which is one reason it's used in both the PlayStation 3 and in dedicated MD5-breaking clusters.
My graphics card was $70. It has a single processor core, so it can only process one thread at a time. Graphics adapters with many cores and a general-purpose programming API are uncommon, and the APIs vary wildly from vendor to vendor; dual-core and quad-core PCs, however, are the norm. Any PC over $400 or $500 is likely to have a four-core processor but unlikely to have a 128-core graphics adapter. All Intel and AMD processors play nice with multithreading, and most of the newer ones are 64-bit. It would be absolutely absurd to program DF to work with GeForce GPGPU cards when you can make it work on everyone's multi-core PCs, which can use 64-bit registers that go nicely with their huge sticks of inexpensive RAM.

tolkafox

  • Bay Watcher
  • Capitalism, ho!
Re: Using GPU power instead of CPU power?
« Reply #13 on: May 11, 2011, 09:42:01 pm »

Please use the search and reply to an older thread

Please stop posing as footkerchief, no matter how convincing you are :P
>.>
<.<
Can any of you tell me the definition of a thread?
Can all of you please wikipedia the term 'memory wall'?
Are any of you familiar with Amdahl's law?

I think the solution is clear: please donate more money to Toady One the Great so that he can go back to college and learn new computer mumbo-jumbo. Expect fewer updates.
It was a miracle of rare device, A sunny pleasure-dome with caves of ice!

Draco18s

  • Bay Watcher
Re: Using GPU power instead of CPU power?
« Reply #14 on: May 11, 2011, 09:47:50 pm »

Quote from: tolkafox
>.>
<.<
Can any of you tell me the definition of a thread?
Can all of you please wikipedia the term 'memory wall'?
Are any of you familiar with Amdahl's law?

1) A forum thread is any given set of posts under the same heading, arranged in chronological order.
2) RAM access speed isn't increasing as quickly as the CPU's ability to process the data.
3) As for that one: I wasn't aware of it in the sense that it's a named law, but as common sense, I was.
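As a quick worked addendum to #3 (my numbers, purely illustrative): Amdahl's law says that if a fraction p of the work can be parallelized, the best possible speedup on n cores is

S(n) = 1 / ((1 - p) + p / n)

so with p = 0.5, even infinitely many cores (CPU or GPU) top out at a 2x speedup. That's the ceiling on any "just multithread it" plan.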