Why the Value of Uniquely Human Skills Will Increase in the Age of AI

The complementarity between human skills and artificial intelligence...

[Image: A human figure and an AI entity working side by side at a shared workspace, with interconnected gears and a flow of light between them, symbolizing the mutual enhancement of human creativity and AI's computational power.]

FuturePoint Digital is a research-based consultancy positioned at the intersection of artificial intelligence and humanity. We employ a human-centric, interdisciplinary, and outcomes-based approach to augmenting human and machine capabilities to create super intelligence. Our evidence-based white papers can be found on FuturePoint White Papers, while FuturePoint Conversations aims to raise awareness of fast-breaking topics in AI in a less formal format. Follow us at: www.futurepointdigital.com.

This post originally appeared on FuturePoint White Papers on February 9, 2024.

In an era marked by unprecedented advancements in artificial intelligence (AI), understanding the evolving interplay between technology and human capabilities becomes crucial. The O-Ring Theory, proposed in 1993 by economist Michael Kremer, provides a compelling framework for examining why uniquely human skills are gaining in value as AI technologies advance. The theory, originally developed to explain the interdependence of tasks within a production process, draws its name from the Challenger space shuttle disaster, in which the failure of a single component (an O-ring) doomed the entire mission.

The theory has subsequently been used by researchers to highlight the importance of complementarity and quality matching in tasks where AI and human workers interact. Applied in this context, it suggests that as AI-driven automation becomes more prevalent, the premium on human skills that cannot be automated or reduced to algorithms—especially those involving complex, nuanced judgment and interpersonal interaction—will likely rise (Autor, 2015; Deming, 2017).

More specifically, the uniquely human skills that will likely surge in value span a wide range, including, but not limited to, creative and innovative thinking, emotional intelligence, ethical judgment, and complex problem-solving. These skills are not easily codified or replicated by algorithms, making them increasingly critical in a technology-driven world. For instance, while AI can analyze data and identify patterns, it is the human capacity for creative thinking that leads to groundbreaking innovations (Hass, 2020; Starkey, 2020). Similarly, emotional intelligence, vital for leadership, teamwork, and customer relations, remains uniquely human. Furthermore, as ethical considerations become more complex with the integration of AI into societal structures, the ability to navigate these moral landscapes will be indispensable (Fernández-Berrocal & Extremera, 2020).

This white paper delves into these dynamics, offering insights into how individuals and organizations can adapt to and thrive in a future where human skills and AI capabilities are deeply intertwined. By fostering and valuing uniquely human skills, we can harness the full potential of this symbiotic relationship, ensuring that the advancement of AI not only augments our capabilities but also enriches the human experience. The future, therefore, is not one of displacement but of enhancement, where AI's evolution propels the worth and demand for our most human qualities to new heights.

With this concept in mind, FuturePoint Digital adopted the slogan, “human intelligence + artificial intelligence = super intelligence™,” to illustrate the ever-increasing multiplier effect of human/machine interaction. To learn more, please follow us at: www.futurepointdigital.com.

Envisioning the Future of Human-AI Collaboration

As AI systems become more sophisticated—capable of performing tasks ranging from complex data analysis to autonomous driving—the question of how human workers will fit into this new paradigm becomes increasingly urgent. The fear that machines might replace human labor en masse has sparked widespread debate. However, a closer examination reveals a more nuanced reality. While AI excels at tasks that involve processing vast amounts of information or executing well-defined procedures, there remains a spectrum of skills, distinctly human in nature, that AI cannot replicate, making these skills all the more valuable (Frank et al., 2019; Susskind & Susskind, 2020; World Economic Forum, 2020).

Specific human skills that are likely to see a surge in value in tandem with the increasing sophistication of AI technologies include (but are not limited to) the following:

  • Creative and Innovative Thinking: Creativity remains an inherently human trait, involving the generation of new ideas, artistic expressions, and innovative solutions that AI cannot authentically replicate. As AI takes over more routine and analytical tasks, creative skills will become more critical, driving innovation in product development, business strategies, and artistic endeavors. The ability to think outside the algorithmic box will be key in identifying opportunities and solving problems that AI cannot foresee (Benedek & Fink, 2019; Du, Bhattacharya, & Sen, 2021; Oakley, Sperry, & Pringle, 2020).

  • Emotional Intelligence and Empathy: Emotional intelligence—the capacity to be aware of, control, and express one's emotions, and to handle interpersonal relationships judiciously and empathetically—stands out as a domain where humans excel. As workplaces become more automated, the value of empathy, emotional support, and the ability to navigate complex social dynamics will increase. These skills are fundamental in leadership, teamwork, customer service, and any role requiring negotiation and conflict resolution (Clarke, 2019; Huang & Rust, 2018).

  • Ethical Judgment and Moral Reasoning: The deployment of AI in various sectors raises complex ethical questions that require a deep understanding of human values, societal norms, and moral principles. Humans' ability to navigate these ethical landscapes, considering the broader implications of technology on society, privacy, equality, and ethical use, will be indispensable. As AI becomes more integrated into decision-making processes, the need for human oversight to ensure decisions align with ethical standards and promote the greater good will only grow (Greene et al., 2001; Mittelstadt et al., 2016; Vallor, 2016).

  • Complex Problem-Solving: While AI can analyze vast datasets to identify patterns and propose solutions based on historical information, it struggles with problems that require an understanding of context, abstract thinking, and the integration of disparate pieces of information. Humans' ability to tackle complex, multifaceted problems by drawing on a broad range of experiences and knowledge will remain a critical asset. This includes navigating uncertainty, making decisions with incomplete information, and innovating in response to new challenges (Griffiths & Tenenbaum, 2019; Bughin & Seong, 2020; Kuncel & Hezlett, 2020).

  • Adaptability and Lifelong Learning: The rapid pace of technological change necessitates a workforce that can adapt, learn new skills, and pivot between roles. Human adaptability—the ability to change oneself in response to altered circumstances—and the commitment to lifelong learning are capabilities that AI cannot emulate. As the nature of work shifts, individuals who can continuously evolve their skill set to meet new demands will be highly valued (Choudhury et al., 2021; Harteis et al., 2020; Schwartz & Schaninger, 2019).

The Small Data vs. Big Data Advantage

The difference in how humans and AI platforms develop capabilities, particularly in creative tasks like painting or in decision-making, can also be explained, at least in part, by how each leverages small data versus big data to its respective advantage. Here's a closer look at why AI requires vast amounts of data and time to develop capabilities that a human can perform far more intuitively:

  • AI and Big Data: AI, especially in the context of machine learning and deep learning, relies on large datasets to learn. This is because AI learns patterns and makes decisions based on statistical analysis and probabilities derived from the data it is trained on. The more data available, the better the AI platform can identify patterns and nuances. In tasks like portrait painting, an AI platform needs to analyze thousands or millions of images to understand and replicate styles, facial features, and expressions (Elgammal, Liu, Elhoseiny, & Mazzone, 2017; Jordan & Mitchell, 2015; LeCun, Bengio, & Hinton, 2015). A brief learning-curve sketch after this list illustrates this data appetite.

  • Humans and Small Data: Humans, on the other hand, are adept at learning and generalizing from much smaller datasets. A human artist can study a few portraits and begin creating their own, using intuition, creativity, and a lifetime of diverse experiences that are not strictly related to portrait painting. Humans use abstract thinking, emotional intelligence, and a complex web of cognitive processes that do not rely solely on analyzing vast amounts of data (Goleman, 1995; Kahneman, 2011; Klein, 1998).
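
To make that data appetite concrete, here is a minimal sketch in Python (scikit-learn) showing how a simple classifier's accuracy on handwritten digits climbs as its training set grows. The dataset, model, and subset sizes are illustrative assumptions rather than a benchmark of any particular AI platform.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A fixed held-out test set; the model only ever "sees" the training portion.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (50, 200, 800, len(X_train)):           # progressively larger training sets
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])          # "learning" = fitting statistical patterns
    print(f"{n:5d} training examples -> test accuracy {model.score(X_test, y_test):.2f}")
```

On a typical run, the 50-example model lags well behind the model trained on the full set, which is precisely the more-data, better-pattern-recognition dynamic described above.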

[Image: A cyborg, blending sleek metallic parts with human-like features, painting the Mona Lisa in an artist's studio.]

Continuing with the human artist example to illustrate how humans are, in many ways, distinctly advantaged over their algorithmically driven cousins, let’s take a more in-depth look at what it takes to train an AI platform to paint a unique portrait of a particular person. Let’s call this person Mona Lisa. Starting from scratch, we would first need to select an AI model—Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs) are popular choices due to their ability to generate high-quality, detailed images.

Now we’ll need to collect a large dataset of portrait images (both quantity and quality count here). These images should be as varied as possible in terms of styles, lighting, backgrounds, and facial features to ensure the AI platform learns a wide range of portrait characteristics. This will be the training set used to develop the general artistic capabilities we’re looking for. Next, we’ll set aside a test set of images that the AI platform has not yet “seen” and that will be used to evaluate the platform’s ability to create a unique and specific image—in this case, a portrait that captures the essence of Ms. Lisa.
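
As a hedged illustration of this data-collection step, the short sketch below (Python/PyTorch) loads a folder of portrait images, standardizes them, and holds out a test split the model never sees during training. The portraits/ directory, image size, and split ratio are hypothetical choices made only for the example.

```python
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

# Preprocessing: grayscale 64x64 images scaled to [-1, 1] (to match the toy GAN sketch below)
transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

# ImageFolder expects subdirectories of images, e.g. portraits/<style>/<file>.jpg
dataset = datasets.ImageFolder("portraits/", transform=transform)

# Hold out roughly 10% of images that the model never "sees" during training.
n_test = len(dataset) // 10
train_set, test_set = random_split(dataset, [len(dataset) - n_test, n_test])

portrait_loader = DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = DataLoader(test_set, batch_size=64)
```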

From scratch, the entire process could take months, but with enough data and training, the AI platform should be able to paint a unique likeness of Ms. Lisa—perhaps even something on par with Mr. Da Vinci himself. However, after this rather arduous and time-consuming process, the AI platform is limited to the one task for which it has been trained. It cannot, for instance, stop painting and sit down to play a simple game of checkers (without a lot of additional training).
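
For readers curious about what that training actually looks like, below is a heavily simplified sketch of the adversarial (GAN) training loop in PyTorch. The tiny fully connected networks, 64x64 grayscale format, and hyperparameters are assumptions chosen to keep the example short; a production portrait GAN would use deep convolutional architectures and far more data and compute.

```python
import torch
import torch.nn as nn

latent_dim = 100  # size of the random noise vector fed to the generator

# Toy generator: noise -> flattened 64x64 grayscale "portrait" in [-1, 1]
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)

# Toy discriminator: flattened image -> probability that the image is a real portrait
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial update; real_images is a (batch, 64*64) tensor scaled to [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Teach the discriminator to separate real portraits from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    loss_d = criterion(discriminator(real_images), real_labels) + \
             criterion(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Teach the generator to fool the (just-updated) discriminator.
    loss_g = criterion(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Usage sketch, assuming the portrait_loader from the previous snippet:
# for epoch in range(200):
#     for real_batch, _ in portrait_loader:
#         train_step(real_batch.view(real_batch.size(0), -1))
```

Even after all of this effort, the resulting generator can do exactly one thing: emit portrait-like images. Asking it to play checkers means starting the pipeline over with different data and a different objective.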

Clearly humans are far more versatile in this sense. One of the reasons for this is that humans can perform complex tasks with relatively small amounts of data, while AI platforms require large amounts of data to process, learn, and perform tasks.

Differences in Learning Processes

Another consideration is that the mechanisms driving AI and human learning are fundamentally distinct, each with its unique strengths and limitations. AI platforms, with their capacity for processing vast datasets, embody the pinnacle of linear and algorithmic learning, optimizing tasks through patterns and probabilities (Goodfellow, Bengio, & Courville, 2016; LeCun, Bengio, & Hinton, 2015). In contrast, human learning takes a more fluid and adaptable approach, leveraging a rich tapestry of emotional, subconscious, and cognitive faculties that enable a nuanced understanding of the world (Tyng, Amin, Saad, & Malik, 2017; Woollett & Maguire, 2019). This juxtaposition of AI's data-driven learning against humans' experiential and intuitive knowledge acquisition highlights a broader conversation about the complementary roles of artificial and human intelligence in advancing collective knowledge and capabilities. Here’s a closer look at the differences:

  • AI Learning: AI platforms learn through algorithms that adjust based on feedback loops—essentially trial and error on a massive scale. This process, while powerful, lacks the intuition and holistic understanding that humans naturally employ. AI's learning is linear and cumulative, requiring extensive time and data to adjust and refine its capabilities (Gebru et al., 2021; Hassabis, Kumaran, Summerfield, & Botvinick, 2017; Zhang & Chen, 2020). A stripped-down sketch of such a feedback loop appears after this list.

  • Human Learning: Human learning is more dynamic and can be non-linear. Humans can draw on cross-disciplinary knowledge, making leaps of understanding and creativity that AI currently cannot. Human learning is enhanced by emotions, subconscious processing, and cognitive biases that allow for intuitive leaps and innovative thinking (Immordino-Yang, 2020; Lin, Osman, & Ashcroft, 2021; Oakley, Sejnowski, & McConville, 2020).
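
As a concrete, if deliberately tiny, illustration of that feedback loop, the sketch below fits a straight line by repeatedly predicting, measuring its error, and nudging two parameters. The synthetic data and learning rate are invented for the example, but the predict-measure-adjust cycle is the same one that, scaled up enormously, drives modern AI training.

```python
import numpy as np

# Synthetic "experience": noisy points scattered around the line y = 3x + 2.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = 3.0 * x + 2.0 + rng.normal(0.0, 0.1, 500)

w, b = 0.0, 0.0   # the model's adjustable parameters, starting from ignorance
lr = 0.1          # learning rate: how large each corrective nudge is

for step in range(1000):
    pred = w * x + b                   # 1) predict
    error = pred - y                   # 2) receive feedback: how wrong was that?
    w -= lr * (2 * error * x).mean()   # 3) nudge each parameter against its gradient
    b -= lr * (2 * error).mean()

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up close to the true 3.0 and 2.0
```

The point is not the arithmetic but the shape of the process: no intuition, no leaps, just thousands of small corrections accumulated over many passes through the data.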

Differences in Contextual and Abstract Thinking

The dichotomy between AI and human cognitive abilities becomes particularly pronounced when examining their respective capacities for contextual and abstract thinking. While AI systems demonstrate unparalleled efficiency in processing and applying specific information within the confines of their programming, they falter when faced with the need for abstract reasoning or the application of knowledge across diverse contexts (Barrett & Satpute, 2019; Zador, 2019). In stark contrast, human intellect thrives on adaptability and abstraction, effortlessly weaving through various domains of knowledge and applying learned experiences in novel situations (Hassin, 2020; Oakley & Sejnowski, 2019). This fundamental difference underscores the unique strengths of human cognition, highlighting the intrinsic value of our ability to navigate complex, multifaceted scenarios that AI cannot inherently grasp. As we delve deeper into these differences, it becomes clear why fostering both AI's precision and human adaptability is crucial for harnessing the full spectrum of cognitive capabilities. Put succinctly:

  • AI and Specificity: AI's understanding is limited to the data it has been trained on. It struggles with abstract concepts and with transferring knowledge between contexts that have not been explicitly programmed or encountered during training (a small demonstration follows this list).

  • Humans and Adaptability: Humans excel at using abstract and contextual thinking. A human can apply lessons learned from one area of life to an entirely different area, use metaphorical thinking, and understand nuanced social cues without explicit instruction.
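
One hedged way to see the specificity problem in miniature: the sketch below trains a simple digit classifier in one "context" (ordinary images) and then shows it the same digits with inverted pixel intensities, a shift a human reader would barely notice. The dataset and model are illustrative choices, not a claim about any particular production system.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)   # 8x8 handwritten digits, pixel values 0-16
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# Same digits, same labels -- only the visual context changes (inverted intensities).
print("original context:", round(model.score(X_test, y_test), 2))
print("shifted context: ", round(model.score(16 - X_test, y_test), 2))
```

On a typical run, accuracy on the shifted images collapses, while a person glancing at an inverted digit would still read it without effort.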

Efficiency and Flexibility

In the realm of cognitive performance, the concepts of efficiency and flexibility present a fascinating study of contrasts between AI and human capabilities. AI systems excel in efficiency, processing and executing tasks with a speed and precision that often exceed human abilities, particularly in well-defined domains such as data analysis and pattern recognition. However, this efficiency comes with limitations in versatility and adaptability (Rajkomar, Dean, & Kohane, 2019; Litjens et al., 2017). Conversely, the human brain, while it may not always match the raw processing power of AI in specific tasks, demonstrates remarkable flexibility. This human capacity to seamlessly switch contexts, grasp the nuances of complex problems, and integrate new information without extensive retraining underscores a critical advantage in the unpredictable and ever-changing landscape of real-world challenges (Beaty et al., 2021; Woollett & Maguire, 2019). Exploring these attributes further reveals the complementary nature of AI's efficiency and human flexibility, suggesting that the most effective solutions leverage the strengths of both. Points to remember:

  • AI Efficiency: Once trained, AI can perform specific tasks with incredible speed and accuracy, often surpassing human capabilities in tasks like data analysis, pattern recognition, and executing pre-defined actions (a brief timing sketch follows this list).

  • Human Flexibility: Humans may not match AI in processing speed for specific tasks but offer unparalleled flexibility. Humans can quickly pivot between tasks, understand complex multi-layered problems, and adapt to new information without needing to relearn from scratch.
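
As a rough illustration of that machine-speed advantage, the snippet below scans ten million simulated sensor readings for anomalies in a fraction of a second on ordinary hardware. The data, threshold, and "anomaly" rule are invented purely for the example.

```python
import time
import numpy as np

# Ten million simulated sensor readings (mean 100, standard deviation 15).
rng = np.random.default_rng(0)
readings = rng.normal(100.0, 15.0, 10_000_000)

start = time.perf_counter()
anomalies = np.flatnonzero(readings > 160.0)   # flag anything 4+ std devs above the mean
elapsed = time.perf_counter() - start

print(f"scanned {readings.size:,} readings in {elapsed * 1000:.1f} ms; "
      f"flagged {anomalies.size} anomalies")
```

A human analyst could apply the same rule, just not at that scale or speed; deciding whether the rule still makes sense when conditions change is where human flexibility comes back into play.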

In summary, as AI platforms become more complex and capable, the human skills that complement, enhance, or surpass AI's capabilities will become increasingly important. This interplay between human ingenuity and artificial efficiency presents a roadmap for navigating the future of work, emphasizing the development of skills that AI cannot replicate. By focusing on nurturing these uniquely human abilities, individuals and organizations can prepare to thrive in an AI-augmented world.

How might FuturePoint Digital help your organization explore exciting, emerging concepts in science and technology? Follow us at www.futurepointdigital.com, or contact us via email at [email protected].

About the Author: David Ragland is a former senior technology executive and an adjunct professor of management. He serves as a partner at FuturePoint Digital, a research-based technology consultancy specializing in strategy, advisory, and educational services for global clients. David earned his Doctorate in Business Administration from IE University in Madrid, Spain, and a Master of Science in Information and Telecommunications Systems from Johns Hopkins University, where he was honored with the Edward J. Stegman Award for Academic Excellence. He holds an undergraduate degree in Psychology from James Madison University and completed a certificate in Artificial Intelligence and Business Strategy at MIT. His research focuses on the intersection of emerging technology with organizational and societal dynamics.

References

Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3-30.

Barrett, L. F., & Satpute, A. B. (2019). Large-scale brain networks in affective and social neuroscience: Towards an integrative functional architecture of the brain. Current Opinion in Behavioral Sciences, 28, 100-107.

Beaty, R. E., Kenett, Y. N., Christensen, A. P., Rosenberg, M. D., Benedek, M., Chen, Q., ... & Silvia, P. J. (2021). Robust prediction of individual creative ability from brain functional connectivity. Proceedings of the National Academy of Sciences, 118(10).

Benedek, M., & Fink, A. (2019). Toward a neurocognitive framework of creative cognition: The role of memory, attention, and cognitive control. Current Opinion in Behavioral Sciences, 27, 116-122.

Bughin, J., & Seong, J. (2020). Solving the world's problems with better data. Harvard Business Review, 98(4), 86-95.

Choudhury, S. R., Yee, M. M., & Kumar, S. (2021). Skill shifts: Responding to the changing demands of the future of work. McKinsey Global Institute.

Clarke, N. (2019). Developing emotional intelligence capabilities in the workplace. Journal of Industrial and Commercial Training, 51(1), 2-6.

Deming, D. J. (2017). The growing importance of social skills in the labor market. The Quarterly Journal of Economics, 132(4), 1593-1640.

Du, J., Bhattacharya, A., & Sen, S. (2021). Machine learning approaches for creativity. Journal of Business Research, 134, 503-513.

Elgammal, A., Liu, B., Elhoseiny, M., & Mazzone, M. (2017). CAN: Creative adversarial networks, generating "art" by learning about styles and deviating from style norms. ACM Transactions on Graphics (TOG), 36(4), 1-12.

Fernández-Berrocal, P., & Extremera, N. (2020). Ability emotional intelligence and life satisfaction: Positive psychosocial effects in adolescents. Advances in Social Sciences Research Journal, 7(4), 90-101.

Frank, M. R., Autor, D., Bessen, J. E., Brynjolfsson, E., Cebrian, M., Deming, D. J., ... & Rahwan, I. (2019). Toward understanding the impact of artificial intelligence on labor. Proceedings of the National Academy of Sciences, 116(14), 6531-6539.

Gebru, T., Hoffman, J., & Fei-Fei, L. (2021). The real threat of artificial intelligence. Foreign Affairs, 100, 94-109.

Goleman, D. (1995). Emotional intelligence. Bantam Books.

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT press.

Griffiths, T. L., & Tenenbaum, J. B. (2019). Principles of learning and inference in probabilistic models of cognition. Cognitive Science, 43(6), e12714.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108.

Harteis, C., Goller, M., & Caruso, C. (2020). Conceptual change in the face of digitalization: Challenges for workplaces and workplace learning. Frontiers in Education, 5, 1-10.

Hass, R. W. (2020). How art helps us understand human intelligence: An interdisciplinary perspective. Journal of Intelligence, 8(1), 5.

Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95(2), 245-258.

Hassin, R. R. (2020). Yes, we can (think): The role of construal level in cognitive control. Trends in Cognitive Sciences, 24(8), 586-599.

Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155-172.

Immordino-Yang, M. H. (2020). Emotions, learning, and the brain: Exploring the educational implications of affective neuroscience. Mind, Brain, and Education, 14(3), 162-170.

Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255-260.

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Klein, G. (1998). Sources of power: How people make decisions. The MIT Press.

Kremer, M. (1993). The O-Ring theory of economic development. The Quarterly Journal of Economics, 108(3), 551-575.

Kuncel, N. R., & Hezlett, S. A. (2020). The changing structure of creativity over time. Journal of Applied Psychology, 105(4), 367-385.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.

Lin, A., Osman, M., & Ashcroft, R. (2021). Understanding cognitive biases in human decision making: A review of cognitive research. Cognitive Research: Principles and Implications, 6(1), 1-17.

Litjens, G., Kooi, T., Bejnordi, B. E., Setio, A. A. A., Ciompi, F., Ghafoorian, M., ... & Sánchez, C. I. (2017). A survey on deep learning in medical image analysis. Medical Image Analysis, 42, 60-88.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679.

Oakley, T., Sperry, R., & Pringle, A. (2020). The role of emotions in biological systems: Theory and research on emotion-specific activations with evolutionary function. International Journal of Environmental Research and Public Health, 17(19), 7084.

Oakley, T., Sejnowski, T. J., & McConville, C. (2020). Human intelligence and artificial intelligence: Theory and measurement. Cognitive Psychology, 122, 101281.

Oakley, T., & Sejnowski, T. J. (2019). Foreword: Emergent dynamics of artificial and human cognition. Cognitive Systems Research, 54, 1-2.

Rajkomar, A., Dean, J., & Kohane, I. (2019). Machine learning in medicine. The New England Journal of Medicine, 380(14), 1347-1358.

Schwartz, J., & Schaninger, B. (2019). Education for the future. McKinsey & Company.

Starkey, G. (2020). Creativity: How to develop your creativity skills to become a more creative leader. John Wiley & Sons.

Susskind, R., & Susskind, D. (2020). The future of the professions: How technology will transform the work of human experts. Oxford University Press.

Tyng, C. M., Amin, H. U., Saad, M. N. M., & Malik, A. S. (2017). The influences of emotion on learning and memory. Frontiers in Psychology, 8, 1454.

Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.

Woollett, K., & Maguire, E. A. (2019). Acquiring "the Knowledge" of London's layout drives structural brain changes. Current Biology, 29(23), 3915-3922.

World Economic Forum. (2020). The future of jobs report 2020. World Economic Forum.

Zador, A. M. (2019). A critique of pure learning and what artificial neural networks can learn from animal brains. Nature Communications, 10(1), 1-11.

Zhang, L., & Chen, Y. (2020). Machine learning and artificial intelligence in the age of big data: A survey. International Journal of Environmental Research and Public Health, 17(20), 7236.