What can livecoding teach us about cyber-physical systems?

Dr. Ben Swift

01 Oct '19

outline

part un: Ben live @ 3Ai

part deux: what makes a CPS?

part trois: what can livecoding teach us about CPS?

part un: Ben live @ 3Ai

part deux: what makes a CPS?

Advanced robotics, smart grids, autonomous cars, machine learning. Cyber-physical systems are literally all around us—systems that, as they converge, will have an unprecedented economic, social and cultural impact on humanity. - from the 3Ai homepage

definition

what is a cyber-physical system?

what are the key questions?

what are the boundary cases?

part trois: what can livecoding teach us about CPS?

Extempore: The design, implementation and application of a cyber-physical programming language (Andrew Sorensen’s PhD thesis)
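The central scheduling idea in Extempore-style livecoding is "temporal recursion": a function does some work, then schedules its own next invocation at a precise future time. As an illustration only (a Python toy, not Extempore's actual sample-accurate API), a minimal sketch of that pattern:

```python
import heapq

class Scheduler:
    """Toy event scheduler standing in for a livecoding runtime's clock."""
    def __init__(self):
        self.queue = []   # heap of (time, seq, fn, args)
        self.now = 0
        self._seq = 0     # tie-breaker so fns are never compared

    def callback(self, time, fn, *args):
        # schedule fn(*args) to fire at the given time
        heapq.heappush(self.queue, (time, self._seq, fn, args))
        self._seq += 1

    def run_until(self, end):
        # fire all events scheduled up to (and including) `end`
        while self.queue and self.queue[0][0] <= end:
            self.now, _, fn, args = heapq.heappop(self.queue)
            fn(*args)

sched = Scheduler()
events = []

def pulse(beat):
    events.append((sched.now, beat))                 # "play a note" placeholder
    sched.callback(sched.now + 1, pulse, beat + 1)   # temporal recursion: reschedule self

sched.callback(0, pulse, 0)
sched.run_until(3)   # pulses fire at t = 0, 1, 2, 3
```

Because the process re-schedules itself (rather than running in a blocking loop), the livecoder can redefine `pulse` between invocations and the running system picks up the new behaviour—the kernel of "cyber-physical programming" in Sorensen's sense.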


autonomy

agency


assurance

autonomy

How do we design for an autonomous world?

This is both a technical set of questions and a set of philosophical and public-policy questions. Just because we can automate something, should we? When we translate human processes into machine processes, what do we need to consider?

  • where’s the autonomy in livecoding?
  • what does failure look like?
  • what design intervention might we make to help out?

agency

How much agency do we give technology?

As the ability for machines to act independently of human oversight increases with each new tech breakthrough, conversations need to be had about how much agency we give to intelligent cyber-physical systems. Are we comfortable with machines responding to their environment, and interacting with other machines, without a human to check and validate decisions?

  • where’s the agency in livecoding?
  • what does failure look like?
  • what design intervention might we make to help out?

assurance

How do we preserve our safety and values?

Current technological progress calls for new regulatory tools and processes, as algorithms designed in different places are introduced into societies around the world. The virtual nature of these new goods and services has an impact on our ability to regulate their design and use in a way that aligns with core cultural values.

  • where’s the assurance in livecoding?
  • what does failure look like?
  • what design intervention might we make to help out?

there are some “I”s as well as the “A”s

indicators

How do we measure performance and success?

Technical systems have typically been measured on their efficiency. However, when systems start to learn and change their behaviour over time, the objective of efficiency may begin to clash with ideals that the humans in the loop had previously built into the process, implicitly or tacitly. How do we start to conceptualise building for sustainability, for beauty, for values?

  • what are the indicators in livecoding?
  • what does failure look like?
  • what design intervention might we make to help out?

interfaces

How will technologies, systems and humans work together?

In previous decades, we interfaced with computational systems through a screen and a keyboard. This paradigm is already being disrupted as ‘smart’ objects enter our lives. What will it mean when AI-enabled systems are all around us, sensing and responding to us? How do we protect our privacy? What happens to all that data?

  • what are the interfaces in livecoding?
  • what does failure look like?
  • what design intervention might we make to help out?

intent

Why has the system been constructed? by whom? and for what purposes?

It is sometimes tempting to think of a single, monolithic AI. However, AIs will be built for different purposes and with very different intentionality, and inside different larger systems. Making sense of, and mapping, that broader intentionality is central to the emergent new applied science.

  • what is the intent in livecoding?
  • what does failure look like?
  • what design intervention might we make to help out?

open questions

  • if/when is a (textual) code interface the best option for balancing agency/autonomy/assurance… in livecoding? and beyond?

  • what feedback can we provide to help the livecoder stay “on top” of the autonomous processes? is that even desirable?

  • how does the audience fit in? do they matter? what’s their agency?

  • when do static analyses help, and when do they get in the way? what about “AI” helpers?

what’s next?

these questions keep me up at night

if you’d like to help (or just to hang out with the c/c/c group more generally) then let me know 😊