Utopian and dystopian scenarios of AI do not lead to concrete regulations


‘If you portray these models as overly powerful and ascribe some kind of agency to them, you shift the responsibility away from the companies that develop these systems.’ Opinion | Simon Fischer

OpenAI's own technical report on GPT-4 states that it "contains no further details about the architecture, hardware, training compute, dataset construction, training method, or similar".

This secrecy hinders democratic decision-making, and thus regulation of the conditions under which LLMs should be developed and deployed. There is no such thing as "the one good AI"; we should therefore not entrust a comparatively small and privileged group of people, who believe that a "superintelligence" is inevitable (it is not), with deciding how to build "safe" AI.

Instead, we need to start by engaging different people, especially those aggrieved, in the conversation, to change the narrative and the power relations.

Disclaimer: The viewpoints expressed by the authors do not necessarily reflect the opinions, viewpoints and editorial policies of TRT World. We welcome all pitches and submissions to TRT World Opinion – please send them via email to opinion.editorial@trtworld.com.

Simon Fischer is a PhD candidate at the Donders Institute for Brain, Cognition and Behaviour, focusing on the societal implications of AI, in particular AI-based decision support systems used in healthcare.


