Human-centric artificial intelligence: Frontier research and building industry capability

First Published: 27th March 2023
Last Modified: 13th April 2023
DOI: https://doi.org/10.56367/OAG-038-10774

Chin-Teng Lin, Distinguished Professor at the University of Technology Sydney's Human-centric AI Centre, AAII, explores human-centric AI, focusing on frontier research and building industry capability

Distinguished Professor Chin-Teng Lin is a pioneer, inventor and world leader in computational intelligence and co-founder of the GrapheneX-UTS Human-centric AI Centre (HAI) at the University of Technology Sydney, Australia. For the past three decades, he has significantly advanced artificial intelligence (AI), brain-computer interfaces and human-AI teaming across theory, methodology and applications. Here, he provides an overview of his frontier research in combining human intelligence with AI to enable humans to make better decisions in complex, stressful situations and what it takes to translate research into tangible products and services. He also highlights the importance of ethical considerations when teaming humans with robots.

Industrial, social and government sectors have seen rapid change through automation and digitalisation, and autonomous systems and machine learning are now accelerating that change further. Many applications, such as ChatGPT and AI art generators, are already part of everyday life in the digital space.

But what about AI-driven autonomous systems in the real world that are expected to take control during critical and rapidly changing situations, where reliability, accuracy and speed are paramount to avert danger, injury and loss of life, such as travelling in an autonomous car?

For the foreseeable future, humans must continue to be in control to ensure reliable and safe AI.

Human-centric AI needs boundaries

Teaming human intelligence with AI has been demonstrated to produce better performance. In human-centric AI teaming, specific tasks are usually assigned to either humans or AI. (1) This works well where clear boundaries exist in the interaction between humans and AI, e.g. where the area of cooperation is decided in advance, but not where the boundary is fuzzy or the situation is developing and changing.

HAI aims to enable true human and AI teaming, expanding and complementing their individual intelligence to maximise adaptiveness and spontaneous cooperation. HAI translates its research in Deep AI into demonstrable and deployable solutions (3D) through industry-academic cooperation (IAC) for Australian and global industries. HAI calls this the 3D IAC model.

HAI’s focus is on progressing towards Industry 5.0 (2), which will create human-centred solutions using AI and automation. The centre brings together international leaders in AI, postdoctoral researchers and PhD students at its Sydney location. HAI focuses on three research programs:

Trustworthy human-autonomy teaming (tHAT)

tHAT is increasingly gaining attention as the future of human-centric intelligent systems. Humans are masters in judging situations subjectively. Teaming this ability with the purely objective analysis of machines results in better decision-making, especially in dynamic, uncertain and ambiguous environments, such as human-robot teaming in dangerous and unstructured spaces and autonomous driving.

tHAT requires collaborative human-AI agent decisions that exercise control under varying uncertainties and levels of mutual trust. It will be the kernel of future applications, adaptively fusing trust-based information to form harmonious and concerted efforts.
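
To make the idea of trust-based fusion concrete, the Python sketch below combines a human judgement and an AI estimate as a weighted average, with the weights tracking how reliable each party has recently been. It is a minimal illustration under assumed names and update rules, not HAI's tHAT method.

```python
import numpy as np

def fuse_decisions(human_probs, ai_probs, trust_human, trust_ai):
    """Weight each source's probability estimate by the trust placed in it."""
    weights = np.array([trust_human, trust_ai], dtype=float)
    weights /= weights.sum()                       # normalise trust into fusion weights
    fused = weights[0] * np.asarray(human_probs) + weights[1] * np.asarray(ai_probs)
    return fused / fused.sum()                     # keep the result a valid distribution

def update_trust(trust, was_correct, rate=0.1):
    """Move trust towards 1 after a correct call and towards 0 after a mistake."""
    return (1 - rate) * trust + rate * (1.0 if was_correct else 0.0)

# Example: the human favours option A, the AI favours option B; the more-trusted
# human pulls the fused decision towards A.
fused = fuse_decisions([0.7, 0.3], [0.2, 0.8], trust_human=0.9, trust_ai=0.5)
print(fused)   # roughly [0.52, 0.48]
```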

Spontaneous autonomous swarm intelligence (sASI)

In nature, swarm intelligence enables animals and insects to interact with one another and their environment in a decentralised, self-organised way. sASI technology aims to mimic this by equipping multiple autonomous agents, such as robots and drones, with the intelligence for spontaneous teaming, swarming and cooperative tactics, and distributed AI/human-swarm teaming.
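
As a toy illustration of that decentralised, self-organised principle, the sketch below has each agent adjust its heading using only neighbours within a small radius, yet the group gradually aligns. The model, parameters and thresholds are assumptions chosen for illustration; they are not HAI's sASI algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, neighbour_radius, step_size, steps = 30, 0.3, 0.01, 50

positions = rng.random((n_agents, 2))               # agents scattered in a unit square
headings = rng.uniform(-np.pi, np.pi, n_agents)     # initially random directions

for _ in range(steps):
    new_headings = np.empty_like(headings)
    for i in range(n_agents):
        # Local information only: each agent sees neighbours within a fixed radius.
        near = np.linalg.norm(positions - positions[i], axis=1) < neighbour_radius
        # Self-organised alignment: steer towards the neighbours' average heading.
        new_headings[i] = np.arctan2(np.sin(headings[near]).mean(),
                                     np.cos(headings[near]).mean())
    headings = new_headings
    positions += step_size * np.column_stack((np.cos(headings), np.sin(headings)))

# 1.0 means all agents ended up moving in the same direction.
alignment = np.hypot(np.cos(headings).mean(), np.sin(headings).mean())
print("alignment (1 = perfectly aligned):", round(float(alignment), 3))
```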

Several challenges remain in pursuing sASI, including synergism of artificial and natural intelligence, local-vs-global information representation, autonomous subtask allocation, and cooperative-vs-competitive tactics.

Promising applications for swarm-intelligence technologies include traffic control, freight, agriculture, environmental protection and defence.

Natural brain-computer interface (nBCI)

A critical knowledge gap in brain-computer interfaces (BCI) is an intuitive link between humans and AI to convey human intention and AI decision logic.

Current BCIs enable the brain to interact directly with a computer or machine but rely on non-invasive electroencephalogram (EEG)-based technology. They cannot read the brain directly and require input via stimuli such as flickers, flashing photos or trained thoughts.
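
As a generic example of the flicker-based paradigm, a target flickering at a known rate evokes EEG activity at that same frequency, so the attended target can be inferred from spectral power. The sketch below runs on synthetic data; the sampling rate, frequencies and detection rule are assumptions, not HAI's nBCI pipeline.

```python
import numpy as np

fs = 250                                    # assumed EEG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                 # four seconds of recording
candidate_freqs = [8.0, 10.0, 12.0]         # flicker rates of the on-screen targets

# Synthetic signal: a weak response at the attended 10 Hz flicker buried in noise.
rng = np.random.default_rng(1)
eeg = 0.5 * np.sin(2 * np.pi * 10.0 * t) + rng.normal(size=t.size)

# Power spectrum of the recording; pick the candidate frequency with the most energy.
power = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
scores = {f: power[np.argmin(np.abs(freqs - f))] for f in candidate_freqs}

print("attended target:", max(scores, key=scores.get), "Hz")   # expected: 10.0 Hz
```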

Electrocorticography (ECoG) signals, sensed by invasive intracranial measurements, have shown some ability to ‘read’ speech and ‘see’ the object of focus in the mind’s eye, but their invasiveness makes them unsuitable.

HAI aims to develop a foundational new technique for wearable computers and devices: a hands-free, non-invasive nBCI that creates a direct, natural and intuitive link between the brain and machines, replacing current interfaces such as keyboards, touchscreens and hand-gesture recognition.

Ethical considerations for human-centric AI

Teaming humans and AI requires ethical considerations that must be addressed by the collective intelligence of psychology, social science, data science, law and government regulation.

From an engineering aspect, a potential solution is the development of interpretable or expressible AI techniques. (3) These can make AI decision-reasoning processes transparent to humans and potentially mitigate ethical concerns about humans not knowing how AI understands humans, makes decisions and establishes trust with users.
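
A very small illustration of what ‘interpretable’ can mean in practice: the decision comes from a handful of human-readable rules, and the system reports which rules fired and how strongly. This is a hypothetical sketch, not the fuzzy-logic controller of reference (3).

```python
def membership(value, low, high):
    """Triangular membership: peaks midway between low and high, zero outside."""
    mid = (low + high) / 2
    if value <= low or value >= high:
        return 0.0
    return 1 - abs(value - mid) / (mid - low)

def recommend_speed(obstacle_distance_m):
    # Each rule states its own reasoning, so a human can audit the decision.
    rules = [
        ("IF obstacle is near THEN slow down",    membership(obstacle_distance_m, 0, 20),   10),
        ("IF obstacle is far THEN keep cruising", membership(obstacle_distance_m, 10, 100), 60),
    ]
    total = sum(strength for _, strength, _ in rules)
    if total == 0:
        return 0.0, []                              # no rule applies: stop as a safe default
    speed = sum(strength * target for _, strength, target in rules) / total
    fired = [(text, round(strength, 2)) for text, strength, _ in rules if strength > 0]
    return speed, fired                             # the decision plus the rules behind it

speed, explanation = recommend_speed(15)
print(round(speed, 1), "km/h, because", explanation)
```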

The role of industry & governments

Systematically and efficiently releasing university research, such as human-centric AI, into real-world applications requires close collaboration with industry and governments to advance industries such as manufacturing, healthcare, defence and aerospace.

HAI is industry-funded, with its foundation partner, GrapheneX, committing $10 million to establish a research centre to translate frontier research into commercial products and services. Such large-scale funding of academic research centres is uncommon in Australia.

Governments are key drivers of strong R&D capabilities and innovation infrastructure, providing strategic investment to enable prosperity. HAI aligns with the Australian Government’s Australia 2030 innovation roadmap (4) and the NSW Government’s Future Economy Fund. (5) This fund targets end-to-end stages of business growth through initiatives such as the 20-Year R&D Roadmap and the Infrastructure Build Out Program (Build Out), (6) which, among others, specifically target research and innovation in AI and robotics.

HAI is actively engaged in Build Out, which supports building critical and shared R&D and innovation infrastructure hubs. One such hub is Sydney’s Tech Central, located next door to UTS.

Human-centric AI could change the way we live

AI applications have become part of daily life. They enable automation and significant efficiency improvements in a wide range of fields and industries. However, in many situations where reliability, accuracy, safety and speed are paramount, AI alone is insufficient to undertake tasks, especially in ambiguous and evolving situations.

Human-centric AI will create applications and products that are safe and effective, and that could change the way humans work and live. This will require careful ethical considerations.

New strategies in human-centric AI, such as the HAI Centre’s 3D IAC model and the NSW Infrastructure Build Out Program, will create new industries that drive the development of Australia’s sovereign capabilities. Increasing investment in emerging technologies will provide substantial economy-wide returns.

References

  1. J. S. Metcalfe et al., “Systemic oversimplification limits the potential for human-AI partnership”, IEEE Access, vol. 9, pp. 70242-70260, 2021.
  2. https://www.uts.edu.au/news/education/preparing-students-fifth-industrial-revolution
  3. Y. C. Chang, Y. Shi, A. Dostovalova, Z. Cao, J. Kim, D. Gibbons, and C. T. Lin, “Interpretable Fuzzy Logic Control for Multi-Robot Coordination in a Cluttered Environment”, IEEE Transactions on Fuzzy Systems, vol. 29, no. 12, pp. 3676-3685, December 2021.
  4. https://www.industry.gov.au/publications/australia-2030-prosperity-through-innovation
  5. https://www.investment.nsw.gov.au/resources/media-releases/future-economy-fund-to-drive-businesses-and-industries-of-tomorrow/
  6. https://www.chiefscientist.nsw.gov.au/funding/research-and-development/innovation-research-acceleration-program/infrastructure-build-out-program
Please Note: This is a Commercial Profile
