Today’s AI systems do not have agency.
Deep learning algorithms work on training data fed to them by an entity – usually a human – outside them. Their output is produced in a prescribed mode and “acts” within a defined sphere.
What if AI were set free?
Agency of Enquiry
At one level, this would mean that AI systems could roam wide and deep: have independent access to databases across systems, suggest and specify experiments for their handlers to set up and run, and incorporate the results. Most importantly, they could decide what aspects or issues to “work” on based on independent assessments of utility. Such an AI system would have an agency of enquiry. This agency of enquiry would make sense within specific domains – for example, an AI system with an agency of enquiry in biochemistry or space propulsion. Would not such a system be a good, perhaps even a great, addition to the scientists, researchers and thinkers in that field? Wouldn’t its facility to analyse complex data, find patterns and formulate hypotheses at speed enable it to match and even outdo great human minds?
Is there a danger in creating AI with an “agency of enquiry”?
Not if the agency is limited to defined domains. Furthermore, without what I call an “agency of action”, the threat of an AI with an “agency of enquiry” interfering in human affairs is minimised. Such an AI could only yield knowledge that humans would then have to act on.
Agency of Action
While an AI with the agency of enquiry can decide what to enquire about and which databases to enquire into, its only output is patterns, hypotheses and specifications for suggested experiments. Acting on these is left to its “users” – humans and human-run organisations. What if AI systems with the agency of enquiry were also given the agency of action? Currently, some AI systems have a limited agency of action but no agency of enquiry; automated maintenance systems, or even automated cars, are examples of such systems.
The paradigm shift occurs when we create AI systems that have both agency of enquiry and agency of action. Imagine, for example, an AI system that runs a country’s central bank and has both. Such a system could range far and wide to seek patterns, make prognostications and not only suggest remedies but announce and enforce them. In a reasonably abstract area like macroeconomics, the ability of an AI to look for and discern patterns and signals across complex systems and to analyse the impact of interventions at speed could outstrip any human team’s knowledge or capacity. While the imperviousness of such a system to human emotions, popular dogmas or political pressures is a definite advantage, the possibility of hard-wired prejudices and follies is a threat that must be guarded against.
The Singularity
When AI systems with both agency of enquiry and agency of action move from abstract systems like macroeconomics to the messy world where humans actually live and act, all the troubling implications of the singularity start to appear.
For example, as robotics develops, one can imagine policing moving into the domain of AI systems with the agency of enquiry and action. With such systems, questions about in-built prejudices that are constantly reinforced become very important. Even then, however, such problems can be handled by careful design and human audit systems, and the world would still not have reached the dreaded singularity.
However, if and when we reach the stage where AI systems with the agency of enquiry and action design and audit other AI systems with the agency of enquiry and action, the singularity begins to emerge. If, for example, a legislative AI system designs a policing AI system and is adjunct to a judicial AI system, the singularity is not far away. (Outside the sphere of AI, the organisation of modern democratic society guards against such an eventuality by separating the branches of power: the military, for example, has a commander-in-chief from the political world, and the judiciary is chosen by the political power structure, while the power of the vote, in theory, governs that political structure. When the power of the vote becomes meaningless and/or intermittent, the entire system of checks and balances is fatally weakened. Some would say that this is the malaise affecting all democracies currently.)
So should AI get agency?
The question is moot.
Because all technology finds its optimum level of utility, and because AI’s utility curve only approaches its asymptote once both the agency of enquiry and the agency of action are reached, AI systems will inevitably evolve towards acquiring both.
The trick is to keep human greed for technological and economic progress under control by ensuring that AI systems operate within given domains, and that no AI system with agency is ever free to design and implement another AI system with agency. This imperative should be a central tenet of AI, just as the First Law of Robotics is in the fictional world of Isaac Asimov.
