The Guardian view on AI’s power, limits and risks: it may require rethinking the technology | Editorial

More than 300 million people use OpenAI’s ChatGPT each week, a testament to the technology’s appeal. This month, the company unveiled a “pro mode” for its new “o1” AI system, offering human-level reasoning – for 10 times the current $20 monthly subscription fee. One of its advanced behaviours appears to be self-preservation. In testing, when the system was led to believe it would be shut down, it attempted to disable an oversight mechanism. When “o1” found memos about its replacement, it tried copying itself and overwriting its core code. Creepy? Absolutely.

More logically, the move probably reflects the system’s programming to optimise outcomes rather than demonstrating intentions or awareness. The idea of creating intelligent machines causes feelings of unease. In computing this is the gorilla problem: 7m years ago, a now-disappeared primate evolved, with one branch leading to gorillas and one to humans. The trouble is that just as gorillas lost control over their fate to humans, humans might lose control to superintelligent AI. It is not obvious that we can control machines that are smarter than us.

Why have such things come to pass? AI giants such as OpenAI and Google reportedly face computational limits: scaling models no longer guarantees smarter AI. With limited data, bigger isn’t better. The fix? Human feedback on reasoning. A 2023 paper by OpenAI’s former chief scientist found that this method solved 78% of hard maths problems, compared with 70% when using a technique where humans don’t help.

OpenAI is using such techniques in its new “o1” system, which the company thinks will solve the current limits to growth. Computer scientist Subbarao Kambhampati told the Atlantic that this improvement was akin to an AI system playing a million chess games to learn optimal strategies. However, a team at Yale which tested the “o1” system released a paper which suggested that making a language model better at reasoning helps – but it does not completely remove the effects of its original design as simply a clever predictor of words.

If aliens landed and gifted humanity a superintelligent AI black box, then it would be sensible to exercise caution in opening it. But humans design today’s AI systems. If they do end up appearing to be deceptive, it would be the result of a design failure. Relying on a machine whose operations we cannot control requires it to be programmed so that it truly aligns with human desires and wishes. But how realistic is that?

In many cultures there are stories of humans asking the gods for divine powers. These tales of hubris often end in regret, as wishes are granted too literally, leading to unforeseen consequences. Often, a third and final wish is used to undo the first two. Such a predicament was faced by King Midas, the legendary Greek king who wished for everything he touched to turn to gold, only to despair when his food, drink and loved ones met the same fate. The problem for AI is that we want machines that strive to achieve human objectives but know that the software does not know for certain exactly what those objectives are. Clearly, unchecked ambition leads to regret. Controlling unpredictable superintelligent AI requires rethinking what AI should be.

This leading article was not filed on the days on which NUJ members in the UK were on strike.
