Uh Oh, it’s A.I.

It looks like Stephen Hawking finally got around to reading Karel Čapek’s 1920 play Rossum’s Universal Robots, a dystopian work about sentient robots happy to work for humans at first, before they rebel and cause the extinction of peeps (well up there with Huxley and Orwell). More recently, Steve Wozniak, Bill Gates, Elon Musk and Stephen Hawking have each got in on the act of warning humanity about the dangers of Artificial Intelligence (AI), and before them Bill Joy, no less a figure than the co-founder and Chief Scientist of Sun Microsystems, had an article published in Wired magazine back in April 2000: “Why the Future Doesn’t Need Us”. Here is the LINK.

Musk fears “a very deep digital superintelligence that … could go into rapid recursive self-improvement in a non-algorithmic way … it could reprogram itself to be smarter and iterate very quickly and do that 24 hours a day on millions of computers”. So will machines turn on humans and take over the world? According to Mike Rhodin, the head of IBM’s Watson Group: “I haven’t seen any technology that could lead to that outcome … what Watson does well is … sort massive amounts of information”. Well, he would say that. Perhaps our children will deal with the existential concern; my concern is more social.

About the only thing I remember from my economics degree is that the total cost of production is a function of the cost of labour and the cost of capital (sketched more formally below). As capital (technology) becomes cheaper and more capable, it will replace labour. Empirically, humanity is probably engineering itself out of meaningful work. Goodbye the days of the butcher, the baker and the candlestick maker; goodbye car factory workers; goodbye surgeons, lawyers, educators, goodbye. Goodbye jobs for 15-year-old school leavers, goodbye thinking professions, goodbye even pulling grogs or coffees for each other; a robot can do that. Here is a picture:

(Give us a Bombay and tonic, thanks mate; I’m thirsty)

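To put that half-remembered economics a little more formally (a minimal textbook sketch, not from the original post; w, r, L and K are the standard symbols for wage, rental rate of capital, labour and capital):

```latex
% Total cost of production with labour L and capital K:
C = wL + rK
% A cost-minimising producer equates marginal product per dollar across inputs:
\frac{MP_L}{w} = \frac{MP_K}{r}
% As technology (capital) gets cheaper, r falls, and the equality is
% restored by substituting K for L: capital replaces labour.
```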

So what will happen? Will the children of now sit around tomorrow on their backsides enjoying utopia, bored because everything is automated? Will they be exterminated by AI, whether Čapek’s robots or Hawking/Musk/Arthur C Clarke’s machines? Will AI be bored by humans and dissociate or leave? Will AI have emotional intelligence or a conscience? Will AI recognise it will still require humans (as the only species that drinks heavily)? Will AI self-iterate, create indefensible weapons we don’t understand, or just turn off the power, throw supermarket logistics into chaos and starve us? Will IBM, Google and the other providers of AI be the only ones, other than their investors, able to make any money? Maybe there oughta be a law against it. What part does the rule of law play in all this?

Commentators speak of regulating AI as though it were like legislating for climate change: too hard. It is as though it would require the political will to legislate Asimov’s Three Laws of Robotics and then expect full compliance from the engineers charged with seeing to it, lest the future be lost. I am a bit doubtful the law will ever adequately deal with augmented decision making, or regulate permitted reasoning in multi-agent systems.

Over time, the law has developed to govern relationships between the State and people, then companies, and now devices. In a piecemeal way, the law already governs aspects of the Internet of Things (IoT) and Machine to Machine (M2M) communications. There are laws, regulations, policies and contracts that govern the information architecture of connected, communicating devices.

The law rarely stands still. At present, IoT and M2M law is a hodgepodge of issues ranging from data privacy, data protection and the rules of ‘discovery’ (ie the practical value of obtaining that data for litigation) to customised contracts and new kinds of governance policies, but it is slowly and surely moving into areas such as interoperability, contracts governing Collaborative Decision Making (CDM), and who owns the intellectual property that flows from CDM. A range of sectors, including technology, government, retail, marketing, education, information service providers, social media, financial services, healthcare and not-for-profits, have IoT, M2M and CDM issues.

Governance in this new architecture is a function of its business context and requires bespoke contractual structuring for each client. It is a short step from the law of the IoT, M2M and CDM to the contracts, industry practices (which inform the ‘reasonableness’ test that in turn informs the common law) and regulations that will deal with AI. In this way, the law also influences ethics which, to my mind, is the difference between what you can do and what you should do.

This legal landscape is navigable but hardly satisfactory.

Computers are good at pattern recognition and knowledge representation (ie language). AI will manifest these behaviours. Epistemologically, the assumption is that all activity, whether by animate or inanimate objects, can be formalised mathematically in the form of predictive rules or laws. Asimov’s Three Laws rely on that assumption; predictive rules are governable, the governance of ethics less so.
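To make that distinction concrete, here is a toy, hypothetical sketch (in Python, not any real robotics API; every name in it is illustrative) of the Three Laws expressed as ordered, machine-checkable rules over a proposed action:

```python
# Toy illustration only: Asimov's Three Laws as ordered predicates.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would executing this action injure a human?
    ordered_by_human: bool   # was the action commanded by a human?
    endangers_self: bool     # would it damage the machine itself?

def permitted(a: Action) -> bool:
    """Apply the Three Laws in priority order to a proposed action."""
    if a.harms_human:            # First Law: never harm a human
        return False
    if a.ordered_by_human:       # Second Law: obey humans,
        return True              # unless the First Law vetoed it above
    return not a.endangers_self  # Third Law: otherwise, protect yourself

# An order from a human that risks only the machine is permitted:
print(permitted(Action(harms_human=False, ordered_by_human=True,
                       endangers_self=True)))  # True
```

Rules of this kind are trivially checkable and therefore governable; what no such predicate captures is the open-ended ‘should’ that the rest of this post calls ethics.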

Will engineers be required to take a Hippocratic oath? Will their ethics be regulated much as the professional standards of conduct of legal practitioners are regulated?

I suspect we’re stuffed if we rely on the law for a solution. The law tends to keep up like Prince Philip: a few steps behind, filling awkward spaces with opinions and faux pas.

  • 25 Mar, 2015
  • Posted by Nicholas Weston
