Autopoiesis: A photographic project

27 October 2018

Tags : english, art, sciart, self, identity, photography, visual

What is 'Autopoiesis'?

I am working on a visual project called 'Autopoiesis' for the SciArt Lab.

It combines the quest for personal identity (it is an exercise in self-reflection) with an exploration of visual concepts I studied during my former research.

Ideas such as Complex Adaptive Systems, self-organization and emergence in biological and social systems are explored through the camera lens, in combination with a deep examination of my own phenomenological experience.

Some pictures of the project

Theoretical Context

'Autopoiesis' is just a new answer to an old set of questions. Rather than a brand-new experiment, it follows the theoretical steps of my former research from an art-centered approach.

As a multidisciplinary researcher, defining the boundaries of my exploration is not easy. I have explored both the nature of the self and the field of complex adaptive systems from mathematical, computational and sociological perspectives. I have also played with anthropology, ethnography and phenomenology, trying to find clues about how identity formation takes place in social agents.

'Autopoiesis' is just a new stage of a philosophical inquiry that started a long time ago.

Personal path

Artificial Intelligence, Systems Biology and Philosophy of Mind

Some years ago, after finishing a bachelor's degree in Computer Systems Engineering, I moved to Madrid to study a Master's in Artificial Intelligence. I started to learn about how bots, neurons or ant swarms could develop emergent properties. For a couple of years I coded basic AI software, such as agent-based models and cellular automata: software in which something we called intelligence could be observed.

I also discovered the field of Philosophy of Mind and how concepts like cognition, consciousness, self, complexity and self-organization could be studied from a scientific and analytical perspective.

Fascinated by this new field, I realized that there was a position available at the university to study artificial consciousness. Unfortunately, the main researcher never replied to my emails, so I ended up working as a researcher on a synthetic biology project within the context of an unconventional computing lab.

Eventually, I found myself working in a research group exploring communication mechanisms between bacteria. Our goal was to build a biological computer in which bacteria were genetically "reprogrammed" to perform human-defined tasks as an electronic computer would. My work basically consisted of implementing a simulator in which we could test how those bacteria would behave and predict the evolution of the system.

After some time working in the lab I left the systems biology field to work as a software engineer for different companies, universities and organizations. However, that experience changed my perception of reality forever. The complexity of communication mechanisms and the nature of interdependence in living organisms would stick in my mind for the rest of my life.

Computer Science and Biology were not that different: both explored information processing systems. Over the next years, complex networks became the most fascinating of metaphors for understanding how the world works.

From Bacteria to Humans: the discovery of Complex Adaptive Systems

Developing my career as a software engineer and doing some side research was a priority at the time. But I was a young guy, ready to change the world. Worried about inequality and social justice, I pursued another degree in Cooperation for Development, something not related at all to my former academic education.

Some time after traveling to Bolivia as a volunteer and visiting some NGOs' projects in Senegal, I started to see how social systems could be conceptualized under the same umbrella as biological and computational systems: complex adaptive systems.

In my mind everything started to make sense. As information-processing beings, cells, individuals and societies could be understood as the foundation of several layers of interdependent elements, of emergent properties, of complexity.

The building blocks of life, individuals and societies were just self-organized units with an autopoietic character, a combination of functional diversity and decentralized communication infrastructures. Physical units exchanging flows of ions, chemicals, bits, words or photons.

Some years later, I got a PhD in Information Science after doing some computational modeling of complex adaptive systems and writing several articles, book chapters and a dissertation about peer-to-peer systems.

Some of the theoretical and formal justifications behind 'Autopoiesis' can therefore be found in my doctoral dissertation, full of artificial life theory, P2P networks, bacteria-based algorithms and sociological models. My dissertation was a formal and theoretical defense of P2P dynamics, controversial at the time, some years before blockchain technologies, protocols like Dat or IPFS, and the new ideas about the distributed web became mainstream.

Finding a multidisciplinary path at the University of Toronto

I had the chance to stay as a Visiting Researcher at the Critical Making Lab of the University of Toronto (UofT) during the fall of 2014. I arrived without a clear project in mind, looking for an immersive experience that would open my research to new perspectives while I finished my PhD.

At the time, I was studying how peer-to-peer (P2P) dynamics could be a source of collective production of knowledge. So I thought that a short period as a participant in such a specific environment, full of Canadian critical thinkers and makers, would help me to understand the Critical Making approach and its possible implications for a P2P society.

Coming from the field of Computer Science, and specifically from Artificial Intelligence, my research practices and methodological frameworks were mostly quantitative. Before going to Toronto, I spent much of my time designing models and algorithms, coding them, and analyzing the resulting data. In that sense, my study of social phenomena was partially reduced to the analysis of mathematical models and their computational implementations, that is, the simulation of artificial minds and artificial societies.

The focus of my dissertation was far away from actual societies, the result of considerable simplifications made to fit the criterion of scientific falsifiability. Popper's legacy sometimes implies a reductionist perspective, and when the object of study is the social dimension of the human being, the researcher has to dramatically reduce the number of variables.

I was already a guy who rejected the compartmentalization of knowledge and embraced a multidisciplinary project of life. In fact, while working as a consultant and developer I studied a wide variety of topics, from Learning Theories to Philosophy of Mind, Cognitive Science and Evolutionary Biology. I had meetings with cognitive psychologists, economists, biologists and physicists. I learned a lot, but, as I said, my approach was always quantitative.

However, the experience at the Semaphore Research Cluster of the University of Toronto implied a completely different mindset, a chance to "experience" without methodological constraints. The result was a kind of anarchical grounded-theory approach that allowed me to observe and participate as a member of the team.

UofT was a game-changer. I had meetings with philosophers, artists, historians, sociologists, designers… even a cartoonist and a filmmaker. Those human-based P2P dynamics really opened my mind to the actual meaning of the word "knowledge" (so much more than my computational simulations).

I learned the basics of 3D printing and physical computing. I learned about situated learning. I could see how artists and academics with a background in the humanities explored new materials, electronic devices and micro-controllers with extraordinary skill. I tested sensors and actuators and got a sense of what it is possible to make, discussing potential projects, critical issues and cultural engagements with everyone.

In a sense, the Critical Making Lab destroyed part of my mental constraints. It provided new intellectual and hands-on tools, new human experiences and an open and critical mindset. It also gave me a clue about what information means and about the implications of technology in critical reflection and social transformation.

My research and my projects could never again fit in one specific field. No technology without humanities. No science without arts.

Postdoctoral research

Discovering visual ethnography

After my PhD, I moved to the United States and started to develop the SciArt Lab while in residence at the LIS Department of the University of North Carolina at Greensboro.

Without the constraints of a doctoral program, I decided to go deeper into some of my old questions. It was time to move forward, beyond software, neurons and genes; to explore another layer of complex adaptive systems and social interactions; to jump into the real world.

On top of genetics and cognition, human beings have an extra layer of information processing: culture. And the US was full of social agents with very different social contexts and cultural backgrounds. It was a multicultural society.

Multiculturalism was a fascinating topic, as appealing as biodiversity, as appealing as the distributed and heterogeneous P2P networks of my PhD. A new field of research. The project 'Rhizome Ethnographies' was born:

After several years exploring, from a computational perspective, the evolution of social complexity in heterogeneous and decentralized agent-based models, I have decided to start a new stage of research. With this side project, I would like to study the evolution of identity/identities in real social agents. Specifically exploring heterogeneous societies through the observation of their ethnic, linguistic and cultural contexts, and using visual ethnography to learn about transnational identities and multiculturalism.

So let’s say that I am moving from the fields of Computer Science and Information Science to a new multidisciplinary approach closer to the domains of Sociology and Cultural Anthropology. From software simulations of artificial agents to multicultural communities of actual people.

'Rhizome Ethnographies' was a two-year experiment which ended with a documentary about the US and a multimedia website. But it was also a starting point for a new life of self-conducted research under the framework of the SciArt Lab.

Considering this new stage as a chosen path for knowledge crawling, I would consider it research, game and art alike. Besides that, naturally, I would also consider this project a way of self-exploration. […] I realized that I want to know more about the human being as a social construct. And I want to know it from my own subjective point of view, the point of view of another social construct.

The first part of my documentary 'Looking for Identity' was released in 2017. It can be watched for free here.

The theoretical and visual outcomes of the Rhizome Ethnographies project can also be found here.

Discovering phenomenological research

During the first years of the SciArt Lab, I developed other projects in which there were no clear boundaries between art, science and technology. I published proofs-of-concept, software experiments and aesthetic projects without worrying too much about theoretical frameworks or academic constraints.

I implemented cellular automata with emergent properties for digital music composition, generative visual art tools, open-source virtual reality components… Slowly, first-person experience became more important than objectivity. Music and visual arts became tools for my own self-exploration.

After a couple of years living in the US, the quest for social agents' identity in a multicultural society, and my own immersion in a different environment, involved a lot of cognitive experiences, cultural challenges and the gradual deconstruction of my own 'persona'.

Coming back to Spain and having to face a reverse culture shock made it even clearer. 'Who am I?' became the most complex and fascinating question.

The word "identity" started to replace the word "society" in my research priorities. Understanding my own first-person experience looked more challenging and difficult than exploring others' behavior or identity formation. Even the environment looked more interesting when understood from a purely phenomenological perspective.

This explanation may be useful to understand the theoretical foundations of the project, how it is affected by my personal/academic/professional path, and its current stage:

  • Currently, I work as a Blockchain Expert for Enxendra Technologies, where I lead Research and Development tasks regarding Distributed Technologies. I am responsible for the evolution of our technological stack in terms of decentralization and blockchain-based innovation, towards a model in which traditional third-party solutions can be replaced by P2P technologies. Basically, pushing enterprises towards the complex adaptive systems paradigm, towards a model based on freedom and self-organization rather than centralization.

  • At the same time, I am focused on the experiments of the SciArt Lab, exploring different aspects of human potential from a multidisciplinary perspective, embracing science, technology and the humanities; trying to participate in the collective production of knowledge and tools for a P2P society in which distributed networks and virtual reality will provide a new playground for our species, a new playground for creative experiences: the distributed metaverse.

  • Additionally, as an endless side project, I pursue the quest for personal identity through different tools, especially photography; trying to understand the node in order to transform the network.

So complex. So simple. Autopoiesis.

Note
Some thumbnails of this project are posted on my Instagram account (xmunch) with the hashtag #autopoiesissciartlab. You can subscribe to my feed or to the hashtag itself to stay updated. For prints or a photo exhibit you can contact me directly at xmunch@xmunch.com

Building a Lab in a Distributed Metaverse.

08 December 2017

Tags : programming, metaverse, webvr, vr, code, coding, sciart, aframe, english, open-source, distributed, p2p, decentraland, decentralized

The Metaverse is a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the internet. The word metaverse is a portmanteau of the prefix "meta" (meaning "beyond") and "universe" and is typically used to describe the concept of a future iteration of the internet, made up of persistent, shared, 3D virtual spaces linked into a perceived virtual universe.

The term was coined in Neal Stephenson's 1992 science fiction novel Snow Crash, where humans, as avatars, interact with each other and software agents in a three-dimensional space that uses the metaphor of the real world. Stephenson used the term to describe a virtual reality-based successor to the Internet.

Extracted from Wikipedia

Towards a Virtual Reality

During the last year we have been exploring how the SciArt Lab could contribute to the emergence of the distributed metaverse. We had the chance to test new open-source virtual reality technologies and develop several WebVR components with a triple goal:

  • Create our own immersive experiments in virtual reality for phenomenological and artistic explorations.

  • Contribute to the upcoming 3D Web, sharing content and open-source components with the idea of a future distributed metaverse in mind (built on top of a-frame + IPFS).

  • Build the facilities of the SciArt Lab in a virtual space, in order to have a place that is always accessible, regardless of our location in the physical world.
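Since these experiments are a-frame based, a minimal sketch may help make the idea concrete. The component below is illustrative only (the name `slow-spin` and its schema are my own invention, not actual SciArt Lab code): it registers a behavior in the standard a-frame way and rotates whatever entity carries it.

```javascript
// Hypothetical a-frame component in the spirit of the lab's WebVR
// experiments. In a page, this file would be loaded after aframe.min.js.
const component = {
  schema: { degPerSec: { type: 'number', default: 30 } },

  // a-frame calls tick(time, timeDelta) every frame, with times in ms.
  tick(time, dtMs) {
    const rot = this.el.getAttribute('rotation');
    rot.y = (rot.y + this.data.degPerSec * dtMs / 1000) % 360;
    this.el.setAttribute('rotation', rot);
  },
};

// Register only when the a-frame runtime is present (i.e. in a browser).
if (typeof AFRAME !== 'undefined') {
  AFRAME.registerComponent('slow-spin', component);
}
```

In markup this would be used as `<a-box slow-spin="degPerSec: 45"></a-box>`; the same pattern underlies the gaze-controlled and teleporting behaviors mentioned below.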

Some of those experiments have already been released on social media. However, it may be useful to summarize at least some of our latest achievements in a blog post.

The potential of WebVR is amazing. It leads to new opportunities for developers and content creators, fixing the problem of interoperability between three-dimensional environments by bringing the community together around web standards (compatible with any virtual reality hardware and accessible from any web browser).

Regarding the role of "decentralization" in the co-production of a metaverse, we should mention that the decentralized web is becoming achievable in the short term. If we take into account the evolution of protocols and technologies for distributed consensus and data storage, we cannot deny that the dream of a distributed (completely decentralized) virtual reality is closer than ever before.

The use of new peer-to-peer (P2P) protocols such as IPFS, in conjunction with the Ethereum network or other blockchain-based systems, would make the creation of alternative societies, distributed and transnational, a plausible possibility. As I have been arguing for the last decade, P2P technologies are empowering tools which may reshape our societies, and the keystone of a cyberspace able to guarantee open innovation and knowledge production. If we combine the potential of peer-to-peer with the immersive experience that virtual reality can provide, we may obtain a mind-blowing outcome: a world-wide distributed metaverse.

SciArt Lab Metaverse Components

The following video displays part of the capabilities of our initial experiments in terms of immersive graphics and interaction with gaze controlled events and teleporting.

Additionally, we have been exploring new possibilities for music creation in a 3D virtual world, along with other projects related to music and technology.

So far, our experiments combining music and VR are straightforward a-frame components which may evolve within the context of more specific projects.

The following tweets are some examples of the prototypes we have been developing lately. The code will be updated on GitHub (in my repo or the SciArt Lab's).

From the SciArt Lab Metaverse components to Decentraland

After some of the SciArt Lab Metaverse components were published on GitHub and Twitter, we were mentioned a couple of times in the Week of A-Frame series (supported by Mozilla).

Some weeks later, we were contacted by the team of Decentraland offering support for our project. That was really exciting news, and a great opportunity for the near future of the SciArt Lab.

Decentraland is one of the most promising projects of the new blockchain-based startup ecosystem. They raised $24 million during their ICO. Forbes recently wrote an article about them, noting how new economies may soon emerge in virtual worlds.

Decentraland combines the possibilities of open standards, WebVR and decentralized technologies within a conceptual and economic framework: shared ownership of the platform.

Here there is an introduction from their website:

Decentraland is a virtual reality platform powered by the Ethereum blockchain. Users can create, experience, and monetize content and applications.

Decentraland is the first virtual platform owned by its users. Grab a VR headset or use your web browser and become completely immersed in a 3D, interactive world. There are plenty of opportunities to explore or even create your own piece of the universe. Here, you can purchase land through the Ethereum blockchain, creating an immutable record of ownership. No one can limit what you build.

With full control over your land, you can create unique experiences unlike anything in existence. Your imagination is the limit: go to a casino, watch live music, attend a workshop, shop with friends, start a business, test drive a car, visit an underwater resort, and much, much more—all within a 360-degree, virtual world.

And this is a promotional video of the project:

After talking with them, we agreed to collaborate and build the SciArt Lab Metaverse Branch as a district in Decentraland during 2018. They will provide the resources we need to make this happen, opening a new period for the lab in which patronage and partnerships will make our projects more sustainable.

We have recently initiated a partnership with Magg Architecture for the construction of our facility in the metaverse. More information about the evolution of the project will be published soon.

sciartlabHeadquarters

Digital creation for hackers & makers: prototyping musical tools from the SciArt Lab.

01 June 2017

Tags : programming, midi, music, code, coding, sciart, guitar, alda-tabs, english, open-source, software, ableton

Note
One of the main goals of the SciArt Lab is the open exploration of innovative ideas from a maker/hacker perspective, finding innovation through prototyping rather than relying on mere theoretical approaches. In that sense, we try to combine disruptive technologies and scientific knowledge with unconventional developments and real implementations/prototypes of our ideas. If you want to know more about the research of the SciArt Lab check our website.

What is this article about?

This article is an introduction to some of the projects developed by the SciArt Lab around topics related to digital music creation.

In this post I will summarize part of my hands-on experience based on the intersection of DIY electronics, MIDI controllers, and the development of new tools (coded in Java, Groovy, Processing, Javascript) in combination with existing software such as Ableton Live.

This is an ongoing exploration, so follow us on Twitter to stay updated in the near future.

Music and digital creation

I can summarize the current projects of the SciArt Lab as a set of fun experiments.

Basically, we are hacking music with sound synthesis, MIDI experiments, DIY electronics and algorithmic composition, combining experimental music with brand-new technologies: discovering how coding and music can be combined by prototyping domain-specific languages, enabling self-composed songs with genetic algorithms, or re-discovering MIDI controllers to create audio art.

A. Genetic algorithms, mathematical compositions and generative music

We are exploring the potential of applying Artificial Intelligence techniques and software development to create programs which are able to write and play their own music.

Take a look at our first experiments by watching our videos of cellular automata with emergent properties for music composition.

Each cellular automaton is able to compose its own music based on simple rules, evolving while it plays synthetic instruments in Ableton Live or external devices through MIDI events.
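As a sketch of the general idea (the lab's actual code is not reproduced here; the update rule and the scale below are illustrative assumptions), a one-dimensional cellular automaton can drive composition by mapping each live cell of a generation to a note:

```javascript
// Toy generative-music automaton: one generation per beat, live cells
// become notes of a pentatonic scale (MIDI note numbers). Actually
// sending the notes as MIDI events (e.g. to Ableton Live) is left out.
const SCALE = [60, 62, 65, 67, 70, 72, 74, 77]; // a pentatonic scale, two octaves

// Rule 90 with wraparound: a cell is alive iff exactly one neighbour was.
function step(cells) {
  return cells.map((_, i) => {
    const left = cells[(i - 1 + cells.length) % cells.length];
    const right = cells[(i + 1) % cells.length];
    return left ^ right;
  });
}

// Map each live cell to a scale degree: one "chord" per generation.
function notesFor(cells) {
  return cells.flatMap((alive, i) => (alive ? [SCALE[i % SCALE.length]] : []));
}

let cells = [0, 0, 0, 1, 0, 0, 0, 0]; // seed: a single live cell
for (let gen = 0; gen < 4; gen++) {
  console.log(notesFor(cells)); // here one would emit MIDI note-on events
  cells = step(cells);
}
```

Even this tiny rule produces evolving, self-similar note patterns, which is the appeal of the approach: the "composer" is just a handful of local rules.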

B. Domain Specific Languages

Alda-tabs is the first Domain Specific Language for Guitar Players in the Java Virtual Machine. This piece of software can help guitar players to “execute” their music notes in the JVM, write songs and get audio feedback with basic tab syntax. You can read more about this in this article.

Take a look at the potential of Alda-tabs with chords and arpeggios by listening to this example (code also provided):

./alda-tabs.sh examples/01-guitartabs-example.alda

C. Digital instruments and physical interfaces

A couple of years ago, when I was working as a Visiting Researcher at the Critical Making Lab (University of Toronto), I discovered how a humanistic approach to DIY electronics, coding and making could forever change my conception of research. That experience helped me to see the importance of hands-on learning and the role that tangible objects can play in theoretical or intellectual explorations.

Currently I am working on a prototype of a physical MIDI interface to control digital instruments directly from a guitar fret. This same concept will be explored with different objects and materials (conductive or not) in the following months.

The idea is to go beyond the keyboard as the standard digital music interface and build physical MIDI controllers with wood, cardboard, fabric, etc. More details about this project will be published soon.

In the meantime, I have also been testing some JavaScript libraries for sound synthesis, and playing around with p5.js to develop the foundations of SoundBox, an experimental digital environment for synthetic music creation. Basically, the idea of this tool is to turn a human voice or an instrument (through a microphone) into a MIDI interface. Right now, it detects the fundamental frequency of the microphone's sound signal, allowing the user to transform a continuous signal into a set of discrete notes. It also parses that information and reproduces the played sequence with a sine oscillator.

It is a very simple prototype that still struggles with some harmonics, but it has been a good experiment for learning how these issues work. Let's see; maybe SoundBox is the starting point for something more sophisticated.
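The discretisation step at the heart of SoundBox can be sketched with the standard equal-temperament formulas (this is an assumption about its internals based on the description above, not the actual SoundBox code):

```javascript
// Snap a detected fundamental frequency to the nearest MIDI note, and
// map a note back to a frequency for the sine oscillator to play.
// MIDI note 69 is A4 = 440 Hz; there are 12 semitones per octave.
function freqToMidi(freq) {
  return Math.round(69 + 12 * Math.log2(freq / 440));
}

function midiToFreq(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

console.log(freqToMidi(261.63)); // 60, i.e. middle C
console.log(midiToFreq(69));     // 440
```

This also hints at why harmonics are troublesome: an octave harmonic of the sung note is itself a valid frequency, so it snaps to a "real" (but wrong) MIDI note rather than being rejected.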

img2

D. Music Visualization

One of the research interests of the SciArt Lab is information visualization in unusual ways. I have always been fascinated by synesthesia, and lately I have been testing visual approaches to music. The idea behind some of the prototypes I have been working on is to map MIDI inputs to both physical visualizations (e.g. LEDs) and computational ones.

In this second category, I have been testing the possibility of creating my own 2D graphics with Processing and SVG and animating them while controlling their movements and shapes directly from external MIDI inputs. This is one example of a program/animation that I have implemented recently:

In the previous example, an animation is created dynamically in Processing while the behavior of an animated cartoon responds to the inputs received from an external DIY electronics device. Both the graphics and the sound are produced by the orders received through MIDI inputs.
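A toy version of that MIDI-to-graphics mapping might look like this (the specific hue/radius rules are invented for illustration; the real Processing sketch is not shown here):

```javascript
// Map a MIDI note-on event to drawing parameters: the pitch class picks
// a hue (a simple synesthesia-style mapping), the velocity picks a size.
// In p5.js, the returned values would drive fill() and ellipse() in draw().
function midiToShape(note, velocity) {
  return {
    hue: (note % 12) * 30,  // 12 pitch classes spread around the colour wheel
    radius: 10 + velocity,  // louder notes draw bigger circles (velocity 0-127)
  };
}

console.log(midiToShape(60, 100)); // { hue: 0, radius: 110 }
```

The point of such a mapping is that both the sound and the graphics are driven by the same incoming MIDI stream, as in the animation described above.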

E. Postsynaptic Symphonies

I have always liked music. I started writing songs with a keyboard as a kid and continued with a guitar when I was a teenager. Nowadays, I enjoy playing several kinds of instruments. Besides the keyboard, my acoustic guitar and my wife's classical guitar, I have two harmonicas, some flutes, an ocarina, a ukulele and a guitalele.

Recently, as part of the open-ended exploration of the SciArt Lab, I have also been writing some digital music. I call these pieces postsynaptic symphonies because I find the cognitive experience of listening to that sort of unpredictable song interesting.

I have published some postsynaptic symphonies on SoundCloud:

More information about the evolution of my music-related projects will be coming soon :)

Alda-tabs: Domain Specific Language for Guitar Players in the Java Virtual Machine

13 April 2017

Tags : english, open-source, music, code, jvm, sciart, guitar, alda-tabs, software

Note
This post explains how to easily compose music with alda-tabs, a Domain Specific Language for Guitar Players which runs in the JVM. I have developed alda-tabs as an open-source project, so you can download it for free from GitHub.

What is alda-tabs?

  • It is a Domain Specific Language for guitar players.

  • It is a piece of software to help guitar players to "execute" their music notes in the JVM, compose songs and get audio feedback.

  • It is an extensible tool for music programming mainly oriented to guitar players.

  • It is built on top of Alda, a DSL for music composition in the JVM, so it is compatible with both Alda and Clojure.

Why is it so easy to code guitar songs with alda-tabs?

  • It does not require programming skills.

  • It does not require traditional music notation.

  • It is as straightforward as writing simple guitar sketches in a notebook.

  • You only have to copy your tabs from the paper to a text editor and execute alda-tabs.

alda tabs

How can I create complex digital music with alda-tabs?

  • With alda-tabs you can execute any .alda file, so you can write your songs/programs in Clojure, Alda and alda-tabs within the same block of text.

  • It talks to the JVM, so any experienced programmer can do impossible things :-)

  • It is just a layer on top of Alda, so if you know music theory, you can write complex songs using standard music notation.

What does the alda-tabs syntax look like?

Note
Remember that with alda-tabs you can always use the standard Alda syntax and Clojure code. You can learn more about both languages later to explore the whole potential of alda-tabs. But don’t worry, you don’t need to know more yet. Just follow this tutorial and enjoy :)

What I am going to show you here is the easy and super simple alda-tabs syntax:

  • The tab notation

  • The chord notation

In 10 minutes you will be able to write songs in a text editor and listen to the result through your speakers.

Tab notation

Imagine that you want to play all the strings of the guitar, one after another:

score1

This example is a regular guitar tab in which all the strings are played sequentially with one hand, with no fingers of the other hand pressing the frets.

So the fret number would be 0 in the six positions of the sequence.

How would we write this in Alda syntax?

Don’t worry, I will explain how to do it in alda-tabs syntax below (see How to do it in alda-tabs?), but it is important to read this first in order to compare alda-tabs with Alda.

In Alda syntax we would need to know the note equivalent of each position. In addition:

  • We would write the octave and the note, one after another:

guitar: o4 e o3 b o3 g o3 d o2 a o2 e
  • Another way would be to write the initial octave increasing/decreasing it when needed:

guitar: o4 e/>b/g/d/<<a/e

How to do it in alda-tabs?

Remember that alda-tabs is based on the simple concept of a tab. Basically, the notes of a guitar can be defined by numeric combinations: a number to identify the string (from the first at the bottom to the sixth at the top) and a fret.

To write a note in alda-tabs you only have to write ta followed by the string number and the fret number.

With alda-tabs we can write the same sequence that we previously expressed in Alda. But this time we don't need to know which notes we are playing; we only need to write the tab, the position of our finger given the string and the fret:

 guitar: ta10 ta20 ta30 ta40 ta50 ta60

In this example, we are asking the JVM to play a guitar with the open strings 1, 2, 3, 4, 5 and 6, one after another. That is, ta10 equals string 1 and fret 0, and so on.
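For readers who want the mapping made fully explicit, here is a small sketch (in JavaScript, not part of alda-tabs itself) that resolves a ta token to a MIDI pitch, assuming standard tuning:

```javascript
// Open-string pitches in standard tuning, string 1 at the bottom
// (highest-sounding), as in the text: E4 B3 G3 D3 A2 E2.
const OPEN_STRINGS = { 1: 64, 2: 59, 3: 55, 4: 50, 5: 45, 6: 40 };

// e.g. "ta21" -> string 2, fret 1 -> 59 + 1 = 60 (middle C).
function tabToMidi(tab) {
  const string = Number(tab[2]);       // single digit: strings 1-6
  const fret = Number(tab.slice(3));   // the rest of the token is the fret
  return OPEN_STRINGS[string] + fret;
}

console.log(tabToMidi('ta21')); // 60
```

So a tab token is just "open-string pitch plus fret" under the hood, which is why no octave number is ever needed.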

Take a look now at the fretboard:

notes fret

If you want to play the first C note, according to the graphic displayed above, you don't need to know the octave; you just pick string 2 at fret 1: ta + 2 + 1.

guitar: ta21

You can also modify the duration of a note by adding a character at the end. For example:

guitar: ta21 ta21W ta21Q ta21D ta21H

What does this mean? If you don't specify a duration, the note will last a whole beat (W). You can also play the note for half a beat (H), double (D) or a quarter (Q). These note durations are proportional to the tempo of the score. For example, the following two sentences are not the same:

(tempo 100)
guitar: ta21 ta21W ta21Q ta21D ta21H
(tempo 300)
guitar: ta21 ta21W ta21Q ta21D ta21H

Play with these combinations to see the difference. For more complex timing, check the advanced tips below.
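My reading of the duration letters, expressed as code (the exact semantics here are an assumption based on the description above, for intuition only, not the Alda implementation):

```javascript
// W = one beat, H = half a beat, D = double, Q = a quarter;
// one beat lasts 60/tempo seconds, so the same token gets shorter
// as the tempo rises.
const MULTIPLIER = { W: 1, H: 0.5, D: 2, Q: 0.25 };

function durationSeconds(tempo, letter = 'W') {
  return (60 / tempo) * MULTIPLIER[letter];
}

console.log(durationSeconds(100, 'W')); // 0.6
console.log(durationSeconds(300, 'W')); // 0.2 — same token, three times shorter
```

That is why the two (tempo …) sentences above sound different even though the note tokens are identical.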

Chord notation

Imagine that rather than a sequence of notes you want to play a chord. A basic example would be playing all the open strings at the same time:

score2

You can do this in three ways:

  • In Alda syntax, using the character / to play the notes at the same time:

guitar: o4 e/>b/g/d/<<a/e
  • In alda-tabs syntax, using the tab notation with different voices:

guitar: V1: ta10 V2: ta20 V3: ta30 V4: ta40 V5: ta50 V6: ta60
  • In alda-tabs syntax, but using the chord notation:

guitar: (c 0 0 0 0 0 0 W)

As you can see, the chord notation is just a Clojure function c with seven parameters: the fret of each of the six strings and the duration of the chord.

For example, the D chord would be

(c 2 3 2 0 x x W)

re

You can also use the chord notation to play single notes. For example, the two following sequences are exactly the same:

# alda-tabs syntax

ta10 ta20 ta30 ta40 ta50 ta60

# alda-tabs chord syntax

(c 0 x x x x x W)
(c x 0 x x x x W)
(c x x 0 x x x W)
(c x x x 0 x x W)
(c x x x x 0 x W)
(c x x x x x 0 W)
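To make the equivalence concrete, here is an illustrative helper (JavaScript, not part of alda-tabs) that expands the six fret parameters of the c function into MIDI pitches, with x marking muted strings, assuming standard tuning:

```javascript
// Open-string pitches, string 1 first (highest-sounding): E4 B3 G3 D3 A2 E2.
const OPEN = [64, 59, 55, 50, 45, 40];

// frets: six entries, a fret number or 'x' for a muted string.
function chordToMidi(frets) {
  return frets.flatMap((f, i) => (f === 'x' ? [] : [OPEN[i] + f]));
}

// The D chord from the text, (c 2 3 2 0 x x):
console.log(chordToMidi([2, 3, 2, 0, 'x', 'x'])); // [66, 62, 57, 50] = F#4 D4 A3 D3
```

The single-note chords above fall out of the same rule: five x entries mute five strings, leaving one sounding pitch.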

Advanced tips

You can play tabs with specific durations, in seconds or milliseconds, by using the function t. In this case you write the number of the string, followed by a dot and the fret. At the end, you express the duration you want in string format ("").

guitar: (t 1.0 "2s") (t 2.0 "10ms") (t 2.2 "100ms")

You can do the same with your chords:

guitar: (c 0 x 1 2 2 x "5s")

How can I install alda-tabs?

  • Follow the steps to install Alda.

  • Clone this repo and open the folder alda-tabs in your terminal.

  • Run the Alda server with alda up.

  • Create a simple text file, write your song using the alda-tabs syntax and save it.

  • Execute ./alda-tabs.sh followed by the path of the file you want to play.

  • Listen to the result.

  • If you want to stop a song you can stop the alda server with alda down.

Note
You can also play some scores (provided in the /examples folder) and modify their content to explore different sounds.

Examples

Note
In this document you can both read the code and listen to the output of its execution in alda-tabs. However, the provided audio file is not stereo, while the original output of alda-tabs includes panning. So explore the real result by executing the code/song in your own instance of alda-tabs.

Example 1: Chords and arpeggios

You can start exploring the potential of alda-tabs with chords and arpeggios in example #01:

./alda-tabs.sh examples/01-guitartabs-example.alda

Example 2: The sound of Pi

Try composing mathematical songs by extending Alda and alda-tabs with Clojure. See example #02:

./alda-tabs.sh examples/02-pi.alda

Example 3: Complex songs

You can also see how beautiful songs with multiple instruments can be written with Alda in example #03:

./alda-tabs.sh examples/03-hope-for-future-ext.alda