On livecoding annotations and visualisations
31 Jan 19
I've just finished reading Charlie Roberts'
interactive web essay on annotations and visualisations for live code.
If you haven't read it, go do that now, and make sure you have some
speakers/headphones to listen to the code examples, because it would be a real
shame to read it without watching/listening to his ideas in action. All the code
samples (with the visual annotations) are live-editable, both running "whole
blocks" and re-evaluating individual expressions with ctrl+enter.
The essay lays out some guiding principles:
There are three principles that guide the design of the presented annotations and visualizations.
- If a value changes over time (typically related to musical progression or signal processing), display its current state, either by adding a visualization / annotation or by modifying the source code to reflect the current value.
- Make annotations and visualizations as proximal as possible to the code fragment responsible for generating the data they are representing.
- In addition to displaying when values change, whenever possible also provide some indication of when a value is being read, in particular when there is a direct effect on musical / sonic output.
It then goes on to show some example visualisations in gibberwocky, both in an "all in" demo and then in bite-sized listings which show off the different specific ideas. In general, I really like the ideas, and it's something Andrew Sorensen and I have written about before in our paper Visual Code Annotations for Cyberphysical Programming (2013) in the 1st International Workshop on Live Programming (LIVE) (it's reference #4 in the essay). I'll refer to this a bit in this post, so let's call it "the LIVE paper".
I also think that the three points listed above are pretty solid, especially in a multimedia livecoding context (maybe even in a broader context). One thing I like about the visual annotations provided is that they're mostly ASCII (or ASCII-ish). This is not so important when deploying them in the web browser (since you can do so much fancy styling stuff with CSS & js these days) but it's really important when dealing with… ahem, more venerable editors. I ended up having to use some unholy Emacs hacks with overlays to get the original annotations discussed in the LIVE paper working.
I think that displaying hidden state in comments is a good compromise, and
avoids the need for fancy "extra-textual" overlays. Not that overlays aren't
sometimes useful, but there's a lot you can show with inline text-decoration
hacks and adding a few comments to provide ASCII text to decorate (when it isn't
explicitly represented in the first place).
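To make the "state in comments" idea concrete, here's a minimal sketch (mine, not gibberwocky's actual implementation — the `// =>` marker and `annotateLine` name are made up for illustration) of rewriting a trailing comment on a line of live code to show a hidden runtime value, using only plain text:

```javascript
// Sketch: display hidden runtime state as a trailing line comment.
// Hypothetical helper; the "// =>" marker is an assumed convention.
function annotateLine(line, value) {
  // Strip any previous "// => ..." annotation so repeated updates
  // replace the old value rather than piling up.
  const stripped = line.replace(/\s*\/\/ =>.*$/, "");
  return `${stripped} // => ${value}`;
}

// Example: show the current beat alongside the code that reads it.
console.log(annotateLine("kick.trigger(beat)", 3));
console.log(annotateLine("kick.trigger(beat) // => 2", 3)); // updates in place
```

Because the annotation is just text in a comment, it survives copy/paste and works in any editor that can rewrite a buffer — no overlay machinery required.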
But (you can probably tell that there was going to be a "but" somewhere)
many of the cool annotations displayed don't work while the code is being
edited¹. In some cases they actually break the code (try editing one of
the 'x*ox*xo--x*x*o' patterns while the code's running: you'll end up with
new stuff in your pattern that you didn't put there).
The problem isn't so obvious when all the code listings are fully-formed on page load, and you can just play them as is. But when you try to mess with the code you'll see what I mean (again, I really suggest that you try it: it's super-cool being able to mess around with the live code in the browser).
I feel this particularly keenly because I'm a clean-slate livecoder (as is Charlie), so I'm always moving through an incomplete code state until I get something which will even run. This isn't just a problem for clean-slate livecoding, though: even tweaks to existing code which introduce "bad" code states (from the visualisation's perspective) will cause these issues.
We talked about this in the LIVE paper: the fact that there's a distinction between the "state of the world" vs the "state of the code". This is a fundamental challenge for the sorts of inline code visualisations/annotations shown in the essay, because they use the code as the "raw material" for displaying information about the world (beat/timing and other "hidden variables", audio engine state and output, etc.).
There are a few different ways to tackle this problem:
- mark some annotations as "safe" for code being edited, and some "unsafe"
- when code is being edited, turn off all annotations (or at least the ones for that expression)
- some sort of "grand unified theory" of the delta between the current code state and the current execution state, which reconciles the two to provide a maximal set of acceptable visualisations (this approach includes a Coq program to formally verify that you're doing everything right)
I think the pragmatic choice is #2, as long as the "disabled edits" section is kept as small as possible. It's still going to be a pain, though, because when I'm livecoding I'm tweaking stuff all the time, so it's likely that a lot of the code will spend a lot of the time with the visualisations disabled.
I'll leave #3 up to people smarter than me :) (although perhaps there are heuristics which could do a decent job).
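For what option #2 might look like in practice, here's a hypothetical sketch (none of these names come from gibberwocky): suppress annotations for any expression the coder has touched recently, and re-enable them once the edit has "settled" for some interval.

```javascript
// Sketch of option #2: a per-expression gate that the annotation
// renderer consults before drawing anything. An expression that was
// edited within the last `settleMs` milliseconds gets no annotations.
class AnnotationGate {
  constructor(settleMs = 500, now = Date.now) {
    this.settleMs = settleMs;
    this.now = now; // injectable clock, handy for testing
    this.lastEdit = new Map(); // expression id -> timestamp of last edit
  }
  // Called by the editor on every keystroke inside an expression.
  onEdit(exprId) {
    this.lastEdit.set(exprId, this.now());
  }
  // Called by the annotation renderer on every frame/beat.
  shouldAnnotate(exprId) {
    const t = this.lastEdit.get(exprId);
    return t === undefined || this.now() - t >= this.settleMs;
  }
}
```

The nice property is that the "disabled edits" region is exactly one expression wide, and the annotations come back on their own once you stop typing — though as noted above, a busy livecoder may keep most expressions perpetually "unsettled".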
Finally, I can identify with Charlie when he writes:
But, in the end, the feedback provided by these annotations and visualizations have become a critical part of my live coding practice.
I get this: in my experience (when I've had even more limited annotations than the ones he shows in the essay) they are just as much for my benefit as the audience's. Livecoding is hard, and any extra information you can get about what's going on with your code is super helpful.
In principle, visual code annotations can be even more useful to the livecoder because they allow her to "audition" algorithmic changes to the code without² actually eval-ing the code and changing the music. Here's an example from the essay:
Euclid(9, 16); /* 1010110101011010 */
I might not always know what the (9,16) Euclidean rhythm is, but I'm by now
fairly used to looking at 10100010 sequences and "hearing" the rhythm in my
head. I could poke around with the parameters in a live set, exploring the
parameter space (and thinking through the effect it'll have on the music) and
then only evaluate the code when I'm satisfied. That's super powerful: the
equivalent of the DJ cueing up the next track with one can on their ear and one
ear in the club, something which I don't currently have in my livecoding setup
(although others might).
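If you want to preview these patterns yourself, here's a sketch of Euclidean rhythm generation using the simple "bucket" (rounding) formulation — not necessarily the exact algorithm gibberwocky uses, and different formulations can produce rotations of each other's output:

```javascript
// Sketch: distribute `pulses` onsets as evenly as possible over
// `steps` slots. Each step adds `pulses` to a bucket; when the bucket
// overflows `steps`, that step gets an onset ("1").
function euclid(pulses, steps) {
  let pattern = "";
  let bucket = 0;
  for (let i = 0; i < steps; i++) {
    bucket += pulses;
    if (bucket >= steps) {
      bucket -= steps;
      pattern += "1";
    } else {
      pattern += "0";
    }
  }
  return pattern;
}

console.log(euclid(9, 16)); // a 16-step pattern containing 9 onsets
console.log(euclid(3, 8)); // the classic tresillo shape (up to rotation)
```

Annotating a `Euclid(9, 16)` call with the string this returns is exactly the kind of "audition before eval" feedback described above: you can read off the rhythm before committing it to the audience's ears.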
You can probably tell that I think there's a productive research agenda here, and I hope Charlie continues with it. I hope to help out myself, too. I guess my main point is just to shout from the rooftops:
any code visualisation/annotation techniques must be robust for code which is currently being edited
I'm not just talking about technical issues, either; obviously any demo/prototype is going to have those, but they're fixable. I think there are deeper issues with trying to use live text as both the description of program behaviour and as a "view" on the hidden state of the program.
Anyway, this is just a blog post, so I'm off the hook with regard to rigour, accountability and just general good scholarship, right :)
As an addendum, a few thoughts on web publishing. I love that this essay/paper is published online: the interactive examples are crucial to getting the point across. I know that some conferences & journals these days allow HTML submission (nicer for reading on mobile, anyway) and other multimedia artefacts (audio/video recordings), but it's still hard to get traction for this sort of rich, interactive in-browser work. The fact that at the end Charlie has to say:
If you're going to cite this website in an academic paper, please consider also citing either reference #1 or reference #7 given above; citations of such papers count more in academia than citations of a website. Plus, there's further information in them not covered in this essay. Thank you!
says it all, really. Oh well. Mad props to Charlie for putting this out there for comment and discussion, and hopefully there's a new generation of publications³ where this stuff can be front-and-centre, not just a weird "supplemental web materials" section to a traditional pdf.
Footnotes

1. Charlie, if I'm doing it wrong, please let me know :)
2. Yes, I know that the real-time feedback of hearing the sound is crucial, and I'm not for a second saying that we do away with it, but there are some situations where I want to check what the result of an algorithmic/parameter change might be without inflicting it on the audience.
3. Distill is great, but it's pretty DL/AI-focussed. The livecoding community needs something similar (although it does privilege livecoding environments which work in the browser, so that's not ideal either. Hmm.)
Cite this post
@online{swift2019onLivecodingAnnotationsAndVisualisations,
author = {Ben Swift},
title = {On livecoding annotations and visualisations},
url = {https://benswift.me/blog/2019/01/31/on-livecoding-annotations-and-visualisations/},
year = {2019},
month = {01},
note = {AT-URI: at://did:plc:tevykrhi4kibtsipzci76d76/site.standard.document/2019-01-31-on-livecoding-annotations-and-visualisations},
}