with profits. But one of the things that's often missed in these conversations is the need to retrain these models regularly, or risk watching them age into irrelevance, particularly in rapidly evolving environments like the news.
Imagine if you exposed a child to everything the world has to offer. For 18 years they absorb all the knowledge they can, but on the first day of their adult life they're locked away in a cave and isolated from the world. Now imagine you provided that person with art supplies and asked them to draw, paint, and render images based on your prompts.
Most AI training today is done on GPUs, each with a relatively small amount of fast memory onboard. Nvidia's A100 and H100 GPUs both sport 80GB of HBM memory, while AMD and Intel's GPUs are now pushing 128GB. While there are other architectures out there with different memory topologies, we're going to stick to Nvidia's A100 because the hardware is well supported, widely available in both on-prem and cloud environments, and has been running AI workloads for years at this point.
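To illustrate why that 80GB ceiling matters, here's a back-of-the-envelope sketch of how many A100s a training run needs just to hold model state. The ~18 bytes per parameter figure is a common rule of thumb for mixed-precision Adam training (fp16 weights and gradients plus fp32 master weights and two optimizer moments), not something from the article, and it ignores activation memory, which often dominates in practice:

```python
import math

BYTES_PER_PARAM = 18   # assumption: mixed-precision Adam state per parameter
A100_MEMORY_GB = 80    # per-GPU HBM, per the A100 spec mentioned above

def min_a100s(params_billions: float) -> int:
    """Minimum A100 count just to fit training state for a model of this size."""
    # 1e9 params/billion * bytes/param / 1e9 bytes/GB cancels out neatly
    state_gb = params_billions * BYTES_PER_PARAM
    return math.ceil(state_gb / A100_MEMORY_GB)

print(min_a100s(7))    # a 7B-parameter model needs 2 GPUs for state alone
print(min_a100s(175))  # a 175B-parameter model needs 40
```

Real training jobs shard this state across many more GPUs to leave room for activations and batches, which is part of why regular retraining is an expensive proposition.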
Thank you for putting this article together! 👍 I enjoyed it. I notice The Register accounts on Mastodon are all bots (Twitter reposts). Perhaps you could change this. I would have rather complimented your article there. 😁
As IBM pointed out in the '70s: if you put the computer in charge of the decisions, what are you going to do when a decision is catastrophically wrong? You have nobody to blame. AI cannot infer. ChatGPT produces hilariously wrong results as soon as you step outside its training data.
It's RACISM.... You think AI is immune? If racism is systemic, and AI is a system... Can't fool me... ;-)