Figuring Out The Innermost Secrets Of Generative AI Has Taken A Valiant Step Forward

  • 📰 ForbesTech

Tags: Artificial Intelligence, Large Language Models (LLMs), Generative AI, Anthropic

Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) whose AI columns have amassed over 7.4 million views. As a seasoned CIO/CTO executive and high-tech entrepreneur, he combines practical industry experience with deep academic research.

In today’s column, I provide an insightful look at a recent AI research study that garnered considerable media attention, suitably so. The study once again pursued the Holy Grail ambition of figuring out how generative AI manages to be so amazingly fluent and conversational. Right now, nobody can explain for certain the underlying logical and meaningful basis for generative AI being so extraordinarily impressive.

Anyway, sorry about the soapbox speech, but I try to deter the rising tide of misleading characterizations whenever I get the chance. I assume you’ve used a generative AI app such as ChatGPT, GPT-4, Gemini, Bard, Claude, or the like. These are also known as large language models (LLMs) because they model natural languages such as English and tend to be very large-scale models that encompass a large swath of how we use natural language. They are all pretty easy to use.

Strictly speaking, perhaps not. It would just seem like a whole bunch of numbers. You would be hard-pressed to say anything other than that a number led to another number, and so on. Explaining how that made a difference in getting a logical or meaningful answer to your prompt would be extraordinarily difficult.

I took you through that indication to highlight that we can at least inspect the flow of numbers. One might argue that a true black box won’t let you see inside; you customarily cannot peer into a presumed black box. In the case of generative AI, the black box label doesn’t quite fit. We can readily see the numbers and watch them as they proceed throughout the input-to-output processing within generative AI.
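To make that concrete, here is a minimal, purely illustrative sketch (not any real LLM; the weights are invented for this example) of a tiny two-layer network. We can fully inspect every intermediate number it produces, yet the numbers by themselves don't explain why the output is a sensible answer:

```python
# A toy two-layer network: we can watch every number flow from
# input to output, which is exactly the kind of visibility the
# column describes. All weights below are hypothetical.

def relu(x):
    return [max(0.0, v) for v in x]

def matvec(weights, x):
    # Multiply a weight matrix (list of rows) by a vector.
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

def forward_with_trace(x, w1, w2):
    """Run the tiny network and record every intermediate vector."""
    trace = {"input": x}
    h = relu(matvec(w1, x))
    trace["hidden"] = h          # the 'numbers' we can readily see
    y = matvec(w2, h)
    trace["output"] = y
    return y, trace

# Hypothetical weights, chosen only for illustration.
w1 = [[0.5, -0.2], [0.1, 0.9]]
w2 = [[1.0, -1.0]]
y, trace = forward_with_trace([1.0, 2.0], w1, w2)
print(trace["hidden"], y)
```

Every value in `trace` is visible, which is why "black box" is not quite the right label; the hard part is explaining what the values mean, not seeing them.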

I believe you are now up to speed, and I can get underway with examining the recent study undertaken and posted by Anthropic. I’ll first explore an online posting entitled “Mapping the Mind of a Large Language Model” by Anthropic, posted online on May 21, 2024. There is also an accompanying online paper, which provides deeper details, that I’ll get to afterward. Both are worth reading. The posting begins: “Today we report a significant advance in understanding the inner workings of AI models.”
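For readers who want a feel for the underlying technique, Anthropic's work uses sparse autoencoders (a form of dictionary learning) to re-express dense internal activations as sparse, hopefully interpretable "features." Below is a minimal sketch of that general idea only; the weights, feature count, and numbers are hypothetical, and a real system learns them from enormous volumes of recorded activations:

```python
# Sketch of the sparse-feature idea: re-express a dense activation
# vector as a sparse combination of candidate "feature" directions.
# All weights here are hypothetical stand-ins for learned values.

def relu(x):
    return [max(0.0, v) for v in x]

def encode(activation, enc_weights, bias):
    # Each row of enc_weights scores one candidate feature;
    # ReLU keeps only the features that clearly fire.
    scores = [sum(w * a for w, a in zip(row, activation)) + b
              for row, b in zip(enc_weights, bias)]
    return relu(scores)

activation = [0.8, -0.3]                      # a dense hidden vector
enc = [[1.0, 0.0], [0.0, -1.0], [-1.0, 0.0]]  # 3 candidate features
bias = [-0.1, -0.1, -0.1]                     # bias nudges toward sparsity
features = encode(activation, enc, bias)      # mostly zeros: sparse
print(features)
```

The interpretability payoff comes afterward: researchers examine which inputs make each sparse feature fire and try to assign it a human-meaningful label.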

The idea for this is inspired by the human brain, which consists of real neurons biochemically wired together into a complex network within our noggins. I want to loudly clarify that how artificial neural networks work is not at all akin to the true complexity of so-called wetware: the human brain, with its real neurons and real neural networks.
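To underscore that distinction, an artificial "neuron" is nothing more than arithmetic: multiply inputs by weights, sum them, and squash the result. A minimal sketch, with all values hypothetical:

```python
import math

# An artificial "neuron" is just arithmetic, nothing biological:
# weighted sum plus bias, squashed through a sigmoid function.
def artificial_neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid squashing to (0, 1)

# Hypothetical inputs and weights, for illustration only.
out = artificial_neuron([0.2, 0.7], [0.4, -0.1], 0.05)
print(out)
```

That one-line computation, repeated billions of times, is the whole mechanism; there is no biochemistry in it, which is the author's point.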

 

