
Why Machine Learning Needs GPUs



With chipmaker AMD’s announcement this week at CES that the bleeding edge of its GPU product line will be targeted at machine learning, at least initially, I thought it would be a good opportunity to take a step back and offer a bit of background on why GPUs and machine learning are so intimately connected in the first place.

In the grand scheme of math, the basic ideas behind machine learning are pretty simple, at least conceptually: a model is, at bottom, big matrices of numbers (the weights), and making a prediction mostly comes down to multiplying those matrices against the input data.

Training a real-life machine learning model might involve repeating that matrix math millions of times, with each iteration tweaking those weights just a little bit to better optimize the resulting model.

Obviously, that’s a big simplification, but the thing to understand is that what we wind up doing in machine learning is mostly crunching together big matrices of numbers.
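
To make that a little more concrete, here's a minimal sketch in plain NumPy of what one of those training iterations looks like. The data, the sizes, and the single-matrix "model" here are all made up for illustration; the point is just that each step is a big matrix multiply followed by a small tweak to the weights:

```python
import numpy as np

# Hypothetical toy problem: 256 examples, 64 input features, 10 outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 64))           # a batch of inputs
Y = rng.normal(size=(256, 10))           # made-up target values
W = rng.normal(size=(64, 10)) * 0.01     # the "weights": just a matrix of numbers

learning_rate = 0.1
for step in range(1_000):                # a real model might run millions of these
    predictions = X @ W                  # the core operation: a big matrix multiply
    error = predictions - Y
    loss = np.mean(error ** 2)           # how wrong we are, on average
    gradient = X.T @ error / len(X)      # the direction to nudge each weight
    W -= learning_rate * gradient        # tweak the weights a little bit
    if step % 200 == 0:
        print(f"step {step}: loss {loss:.4f}")
```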

We can make machine learning algorithms run faster simply by adding more and more processor cores to a GPU, because the matrix math splits naturally into many independent pieces of work. That tends to be an easier engineering problem than the one conventional CPUs face, where parallelism can sometimes help with performance, but finding and exploiting it in general-purpose code is pretty hard.
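
For a rough sense of what that buys you, here's a hedged sketch using CuPy as a GPU-backed stand-in for NumPy (it assumes the cupy package and a CUDA-capable GPU are available; the matrix sizes are arbitrary). Every element of the output matrix can be computed independently of every other, so the GPU can fan the work out across thousands of cores, while the CPU version grinds through it on a handful:

```python
import numpy as np
import cupy as cp    # assumes the cupy package and a CUDA-capable GPU are available

rng = np.random.default_rng(0)
A = rng.normal(size=(4096, 4096)).astype(np.float32)
B = rng.normal(size=(4096, 4096)).astype(np.float32)

# CPU: a handful of cores work through the whole multiply.
C_cpu = A @ B

# GPU: copy the matrices over, then thousands of cores each compute
# their own independent slice of the result at the same time.
C_gpu = cp.asarray(A) @ cp.asarray(B)
cp.cuda.Stream.null.synchronize()        # wait for the GPU to finish

# Same answer (up to floating-point noise); only the speed differs.
print(np.allclose(C_cpu, cp.asnumpy(C_gpu), rtol=1e-3, atol=1e-3))
```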

That’s why GPUs are so important to machine learning, and, increasingly, vice versa.


Article originally posted at bit.ly

Post Author: Michael Byrne

