Building a Lab in a Distributed Metaverse.

08 December 2017

Tags : programming, metaverse, webvr, vr, code, coding, sciart, aframe, english, open-source, distributed, p2p, decentraland, decentralized

The Metaverse is a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the internet. The word metaverse is a portmanteau of the prefix "meta" (meaning "beyond") and "universe" and is typically used to describe the concept of a future iteration of the internet, made up of persistent, shared, 3D virtual spaces linked into a perceived virtual universe.

The term was coined in Neal Stephenson’s 1992 science fiction novel Snow Crash, where humans, as avatars, interact with each other and with software agents in a three-dimensional space that uses the metaphor of the real world. Stephenson used the term to describe a virtual reality-based successor to the Internet.

Extracted from Wikipedia

Towards a Virtual Reality

During the last year we have been exploring how the SciArt Lab could contribute to the emergence of the distributed metaverse. We had the chance to test new open-source virtual reality technologies and to develop several WebVR components with a triple goal:

  • Create our own immersive experiments in virtual reality for phenomenological and artistic explorations.

  • Contribute to the upcoming 3D Web, sharing content and open-source components with the idea of a future distributed metaverse in mind (built on top of a-frame + IPFS; see the sketch after this list).

  • Build the facilities of the SciArt Lab in a virtual space, in order to have a place that is always accessible, regardless of our location in the physical world.
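Regarding the second of those goals, here is a minimal sketch of what serving scene content over IPFS could look like: an a-frame scene whose model is loaded from a content-addressed IPFS URL. This is an illustration only; the <CID> placeholder and the lab.gltf file name are hypothetical, and any IPFS gateway or a local node would work.

    <html>
      <head>
        <!-- A-Frame from a CDN; the scene runs in any WebVR-capable browser -->
        <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
      </head>
      <body>
        <a-scene>
          <a-assets>
            <!-- Content-addressed asset: <CID> is a placeholder for the model's IPFS hash -->
            <a-asset-item id="lab-model" src="https://ipfs.io/ipfs/<CID>/lab.gltf"></a-asset-item>
          </a-assets>
          <a-entity gltf-model="#lab-model" position="0 0 -4"></a-entity>
          <a-sky color="#ECECEC"></a-sky>
        </a-scene>
      </body>
    </html>

Because the asset is addressed by its hash rather than by a server location, any peer hosting the file can serve the same scene.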

Some of those experiments have already been shared on social media. However, it may be useful to summarize at least some of our latest achievements in a blog post.

The potential of WebVR is amazing. It opens up new opportunities for developers and content creators, addressing the problem of interoperability between three-dimensional environments by bringing the community together around web standards (compatible with any virtual reality hardware and accessible from any web browser).

Regarding the role of "decentralization" in the co-production of a metaverse, we should mention that the decentralized web is becoming achievable in the short term. If we take into account the evolution of protocols and technologies for distributed consensus and data storage, we cannot deny that the dream of a distributed (completely decentralized) virtual reality is closer than ever before.

The use of new peer-to-peer (P2P) protocols such as IPFS, in conjunction with the Ethereum network or other blockchain-based systems, would make the creation of alternative societies, distributed and transnational, a plausible possibility. As I have been arguing for the last decade, P2P technologies are empowering tools which may reshape our societies, and the keystone for implementing a cyberspace able to guarantee open innovation and knowledge production. If we combine the potential of peer-to-peer with the immersive experience that virtual reality can provide, we may obtain a mind-blowing outcome: a worldwide distributed metaverse.

SciArt Lab Metaverse Components

The following video displays part of the capabilities of our initial experiments in terms of immersive graphics and interaction with gaze-controlled events and teleporting.
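As an illustration of how this kind of interaction can be wired up in a-frame, here is a hedged sketch (the component name and scene layout are made up for this post, not our released code): a gaze cursor fires a 'click' event after the user stares at an entity for a while, and a small component moves the camera rig to the clicked position.

    <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
    <script>
      // Illustrative component: teleport the camera rig to the entity
      // the user gazes at (the 'click' comes from the fuse cursor below).
      AFRAME.registerComponent('teleport-on-gaze', {
        init: function () {
          var el = this.el;
          el.addEventListener('click', function () {
            var rig = document.querySelector('#rig');
            var target = el.getAttribute('position');
            rig.setAttribute('position', {x: target.x, y: 0, z: target.z});
          });
        }
      });
    </script>

    <a-scene>
      <a-entity id="rig">
        <a-entity camera position="0 1.6 0">
          <!-- Staring at an entity for 1.5 s fires a 'click' on it -->
          <a-cursor fuse="true" fuse-timeout="1500"></a-cursor>
        </a-entity>
      </a-entity>
      <a-box position="4 0.5 -4" color="#4CC3D9" teleport-on-gaze></a-box>
    </a-scene>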

Additionally, we have been exploring new possibilities for music creation in a 3D virtual world, along with other projects related to music and technology.

So far, our experiments combining music and VR are straightforward a-frame components which may evolve within the context of more specific projects; a minimal sketch of the idea follows.
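The sketch below is illustrative (the component name and note frequencies are invented for this post, not taken from the actual prototypes): an a-frame component that plays a short Web Audio sine tone whenever its entity receives a 'click', for instance from the gaze cursor shown above.

    <script>
      // Illustrative sketch: each entity acts as a "key" with its own pitch.
      var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

      AFRAME.registerComponent('play-tone', {
        schema: {frequency: {type: 'number', default: 440}},
        init: function () {
          var data = this.data;
          this.el.addEventListener('click', function () {
            var osc = audioCtx.createOscillator();
            osc.type = 'sine';
            osc.frequency.value = data.frequency;
            osc.connect(audioCtx.destination);
            osc.start();
            osc.stop(audioCtx.currentTime + 0.3); // short 300 ms note
          });
        }
      });
    </script>

    <!-- These go inside an <a-scene>, e.g. next to the teleport box above -->
    <a-box position="-1 1 -3" play-tone="frequency: 440"></a-box>
    <a-box position="1 1 -3" play-tone="frequency: 554.37"></a-box>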

The following tweets are some examples of the prototypes we have been developing lately. The code will be updated on GitHub (in my repo or the SciArt Lab's).

From the SciArt Lab Metaverse components to Decentraland

After some of the SciArt Lab Metaverse components were published on GitHub and Twitter, we were mentioned a couple of times in the Week of A-Frame series (supported by Mozilla).

Some weeks later, we were contacted by the Decentraland team, who offered support for our project. That was really exciting news and a great opportunity for the near future of the SciArt Lab.

Decentraland is one of the most promising projects in the new blockchain-based startup ecosystem. They raised $24 million during their ICO. Forbes wrote an article about them recently, remarking how new economies may soon emerge in virtual worlds.

Decentraland combines the possibilities of open standards, WebVR and decentralized technologies within a conceptual and economic framework: shared ownership of the platform.

Here is an introduction from their website:

Decentraland is a virtual reality platform powered by the Ethereum blockchain. Users can create, experience, and monetize content and applications.

Decentraland is the first virtual platform owned by its users. Grab a VR headset or use your web browser and become completely immersed in a 3D, interactive world. There are plenty of opportunities to explore or even create your own piece of the universe. Here, you can purchase land through the Ethereum blockchain, creating an immutable record of ownership. No one can limit what you build.

With full control over your land, you can create unique experiences unlike anything in existence. Your imagination is the limit: go to a casino, watch live music, attend a workshop, shop with friends, start a business, test drive a car, visit an underwater resort, and much, much more—all within a 360-degree, virtual world.

And this is a promotional video of the project:

After talking with them, we agreed to collaborate and to build the SciArt Lab Metaverse Branch as a district in Decentraland during 2018. They will provide the resources that we need to make this happen, so this opens a new period for the lab, in which patronage and partnerships will make our projects more sustainable.

We have recently initiated a partnership with Magg Architecture for the construction of our facility in the metaverse. More information about the evolution of the project will be published soon.

[Image: SciArt Lab headquarters]

Digital creation for hackers & makers: prototyping musical tools from the SciArt Lab.

01 June 2017

Tags : programming, midi, music, code, coding, sciart, guitar, alda-tabs, english, open-source, software, ableton

Note
One of the main goals of the SciArt Lab is the open exploration of innovative ideas from a maker/hacker perspective, finding innovation through prototyping rather than relying on merely theoretical approaches. In that sense, we try to combine disruptive technologies and scientific knowledge with unconventional developments and real implementations/prototypes of our ideas. If you want to know more about the research of the SciArt Lab, check our website.

What is this article about?

This article is an introduction to some of the projects which have been developed by the SciArt Lab around topics related to digital music creation.

In this post I will summarize part of my hands-on experience at the intersection of DIY electronics, MIDI controllers, and the development of new tools (coded in Java, Groovy, Processing, JavaScript) in combination with existing software such as Ableton Live.

This is an ongoing exploration, so follow us on Twitter to stay up to date.

Music and digital creation

I can summarize the current projects of the SciArt Lab as a set of fun experiments.

Basically, we are hacking music with sound synthesis, MIDI experiments, DIY electronics, and algorithmic composition, combining experimental music with brand-new technologies. We are discovering how coding and music can be combined: prototyping domain-specific languages, enabling self-composed songs with genetic algorithms, and rediscovering MIDI controllers to create audio art.

A. Genetic algorithms, mathematical compositions and generative music

We are exploring the potential of applying Artificial Intelligence techniques and software development to create programs which are able to write and play their own music.

Take a look at our first experiments by watching our videos of cellular automata with emergent properties for music composition.

Each cellular automaton is able to compose its own music based on simple rules, evolving while it plays synthetic instruments in Ableton Live or external devices through MIDI events.
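The exact rules vary between experiments, but the overall pattern is easy to sketch. The snippet below is an illustrative reconstruction, not the released code (the rule number, scale, and timing are arbitrary choices): a one-dimensional automaton is stepped at a fixed interval, and every live cell sends a MIDI note through the Web MIDI API, which is one way a browser sketch can drive Ableton Live or external devices.

    // Illustrative sketch: a 1D cellular automaton (Wolfram rule 110)
    // whose live cells trigger MIDI notes on each generation.
    var RULE = 110;
    var SCALE = [60, 62, 64, 67, 69, 72, 74, 76]; // C major pentatonic over two octaves
    var cells = [0, 0, 0, 1, 0, 0, 0, 0];         // initial generation, one cell per scale degree

    function step(cells) {
      return cells.map(function (_, i) {
        var left = cells[(i - 1 + cells.length) % cells.length];
        var right = cells[(i + 1) % cells.length];
        var pattern = (left << 2) | (cells[i] << 1) | right;
        return (RULE >> pattern) & 1;
      });
    }

    // Assumes at least one MIDI output exists (e.g., a virtual port routed into Ableton Live)
    navigator.requestMIDIAccess().then(function (midi) {
      var output = midi.outputs.values().next().value;
      setInterval(function () {
        cells = step(cells);
        cells.forEach(function (alive, i) {
          if (alive) {
            output.send([0x90, SCALE[i], 100]);                          // note on
            output.send([0x80, SCALE[i], 0], performance.now() + 200);   // note off after 200 ms
          }
        });
      }, 250); // one generation per 250 ms
    });

Because each generation depends only on the previous one, simple local rules like this are enough to produce the kind of emergent, evolving melodies shown in the videos.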

B. Domain Specific Languages

Alda-tabs is the first Domain Specific Language for Guitar Players in the Java Virtual Machine. This piece of software can help guitar players to “execute” their music notes in the JVM, write songs and get audio feedback with basic tab syntax. You can read more about this in this article.

Take a look at the potential of Alda-tabs with chords and arpeggios by listening to this example (code also provided):

./alda-tabs.sh examples/01-guitartabs-example.alda

C. Digital instruments and physical interfaces

A couple of years ago, when I was working as a Visiting Researcher at the Critical Making Lab (University of Toronto), I discovered how a humanistic approach to DIY electronics, coding and making could change my conception of research forever. That experience helped me to see the importance of hands-on learning and the role that tangible objects can play in theoretical or intellectual explorations.

Currently I am working on a prototype of a physical MIDI interface to control digital instruments directly from a guitar fret. This same concept will be explored with different objects and materials (conductive or not) in the following months.

The idea is to go beyond the keyboard as the standard digital music interface and build physical MIDI controllers with wood, cardboard, fabric, etc. More details about this project will be published soon.

In the meantime, I have also been testing some JavaScript libraries for sound synthesis, and playing around with p5.js to develop the foundations of SoundBox, an experimental digital environment for synthetic music creation. Basically, the idea with this tool is to transform a human voice or an instrument (through a microphone) into a MIDI interface. Right now, it detects the fundamental frequency of the microphone's sound signal, allowing the user to transform a continuous signal into a set of discrete notes. It also parses that information and reproduces the played sequence with a sine oscillator.
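The core loop behind that idea can be sketched in a few lines of p5.js (a simplified reconstruction, not the actual SoundBox source; the autocorrelation step in particular is deliberately naive): estimate the microphone's fundamental frequency, snap it to the nearest MIDI note, and replay that note with a sine oscillator.

    // Simplified sketch of the SoundBox idea with p5.js + p5.sound
    var mic, fft, osc;

    function setup() {
      createCanvas(400, 200);
      mic = new p5.AudioIn();
      mic.start();
      fft = new p5.FFT(0.8, 1024);
      fft.setInput(mic);
      osc = new p5.Oscillator('sine');
      osc.amp(0.5);
      osc.start();
    }

    function draw() {
      background(240);
      var freq = estimatePitch(fft.waveform(), sampleRate());
      if (freq > 0) {
        // Snap the continuous frequency to a discrete MIDI note
        var midiNote = Math.round(69 + 12 * Math.log2(freq / 440));
        osc.freq(midiToFreq(midiNote));
        fill(0);
        text('MIDI note: ' + midiNote, 20, 100);
      }
    }

    // Naive autocorrelation pitch estimate; struggles with strong harmonics
    function estimatePitch(buffer, rate) {
      var bestLag = -1, bestCorr = 0;
      for (var lag = 20; lag < 512; lag++) {
        var corr = 0;
        for (var i = 0; i < 512; i++) corr += buffer[i] * buffer[i + lag];
        if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
      }
      return bestLag > 0 ? rate / bestLag : -1;
    }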

It is a very straightforward prototype that struggles with some harmonics, but it has been a good experiment to learn how these issues arise. Let's see; maybe SoundBox is the starting point for something more sophisticated.


D. Music Visualization

One of the research interests of the SciArt Lab is information visualization in unusual ways. I have always been fascinated by synesthesia, and lately I have been testing visual approaches to music. The idea with some of the prototypes I have been working on is to map MIDI inputs to both physical visualizations (e.g., LEDs) and computational ones.

In this second category, I have been testing the possibility of creating my own 2D graphics with Processing and SVG and animating them while controlling their movements and shapes directly from external MIDI inputs. This is one example of a program/animation that I have implemented recently:

In the previous example, an animation is created dynamically in Processing while the behavior of an animated cartoon responds to the inputs received from an external DIY electronics device. Both the graphics and the sound are driven by the messages received through MIDI inputs.
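The Processing sources for these animations are not reproduced here, but the same mapping is easy to sketch in the browser with the Web MIDI API (an illustrative stand-in, not the actual Processing code): incoming note-on messages control the position and size of a shape on a canvas.

    // Illustrative sketch: map incoming MIDI note-on messages to the
    // position and size of a circle on an HTML canvas.
    var canvas = document.createElement('canvas');
    canvas.width = 600;
    canvas.height = 200;
    document.body.appendChild(canvas);
    var ctx = canvas.getContext('2d');

    navigator.requestMIDIAccess().then(function (midi) {
      midi.inputs.forEach(function (input) {
        input.onmidimessage = function (msg) {
          var status = msg.data[0] & 0xF0;
          if (status === 0x90 && msg.data[2] > 0) { // note on with velocity > 0
            var note = msg.data[1];
            var velocity = msg.data[2];
            ctx.clearRect(0, 0, canvas.width, canvas.height);
            ctx.beginPath();
            // Pitch controls the x position, velocity controls the radius
            ctx.arc((note / 127) * canvas.width, 100, velocity / 2, 0, 2 * Math.PI);
            ctx.fill();
          }
        };
      });
    });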

E. Postsynaptic Symphonies

I have always liked music. I started writing songs with a keyboard as a kid and continued with a guitar when I was a teenager. Nowadays, I enjoy playing several kinds of instruments. Besides the keyboard, my acoustic guitar and my wife's classical guitar, I have two harmonicas, some flutes, an ocarina, a ukulele and a guitalele.

Recently, as part of the open-ended exploration of the SciArt Lab, I have also been writing some digital music. I call the pieces postsynaptic symphonies because I find the cognitive experience of listening to that sort of unpredictable song interesting.

I have published some postsynaptic symphonies on SoundCloud:

More information about the evolution of my music-related projects will be coming soon :)