Qualia Logo Animations

Qualia is a ground-breaking digital technology and research project that aims to revolutionize the way audience experiences are evaluated at arts and culture events. Funded by Digital R&D for the Arts, Qualia offers a new way of collecting audience profile information, providing real-time evaluation and feedback, and measuring impact indicators at cultural events.

i-DAT is responsible for developing the technology for this project together with Eric Jensen (University of Warwick), Nathan Gale (Intercity), Mutant Labs, and Elixel. This includes face and smile recognition on live camera feeds, a sentiment analysis tool that calculates mood from social media, a web engine used for data capture and processing, iPhone and Android apps for personal control, public information kiosks and interactive feedback points, and a digital art installation.

For this project I was responsible for developing a real-time visualization in Processing for the main screen. The visualization is driven by real-time mood and emotion calculations captured from social media, and an organic form is animated based on these averaged conditions. An image of the visualized logo follows.
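As a rough illustration of the approach (not the actual project code), the sketch below draws an organic Processing blob whose colour, size, and liveliness respond to a single mood value between 0 and 1; in the installation this value comes from the sentiment analysis engine, while here it is simulated with the mouse so the sketch is self-contained.

// Minimal sketch: an organic form driven by a single "mood" value (0..1).
// In the real system the value arrives from the sentiment analysis engine;
// here it is simulated with mouseX so the sketch runs on its own.
float mood = 0.5;

void setup() {
  size(600, 600);
  noStroke();
}

void draw() {
  // Smoothly follow the simulated mood input.
  float target = map(mouseX, 0, width, 0, 1);
  mood = lerp(mood, target, 0.02);

  background(20);
  fill(lerpColor(color(60, 90, 200), color(250, 180, 60), mood), 180);

  // Draw a blob whose radius is perturbed by Perlin noise;
  // a higher mood value produces a larger, livelier form.
  float baseR = 120 + 100 * mood;
  float wobble = 10 + 60 * mood;
  beginShape();
  for (float a = 0; a < TWO_PI; a += 0.05) {
    float r = baseR + wobble * noise(cos(a) + 1, sin(a) + 1, frameCount * 0.01);
    vertex(width/2 + r * cos(a), height/2 + r * sin(a));
  }
  endShape(CLOSE);
}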


This City's Centre

This City's Centre is a digital performance that took place at Exeter Phoenix from the 17th to the 21st of September 2013 (Exeter, UK), and introduced new ways of creating and experiencing new media performances. For the demands of this work it was important to develop a technological framework that supports the creative aspects of the storyline and enhances the parts of the performance space that relate to the exposition of this artwork. A trans-disciplinary collage generates a surreal experience that offers an invitation to examine daily urban life in local and remote locations simultaneously. This City's Centre was made possible with the help of more than 30 people, and with main funding from the AHRC.


To meet the technological demands of this work, a media system was created to allow the transmission of multiple live video feeds from various geographic locations (around the city of Exeter) to the main performance space via a Wide Area Network (WAN). A client application was developed to capture live video from the camera connected to each computer, and to stream the video over a TCP/IP network connection to the main server application – in the case of this performance, data were sent to the main computer in the venue.
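A heavily simplified sketch of the client side of such a system is shown below, assuming Processing with its video and net libraries; the server address, port, and raw-RGB frame format are placeholders for illustration, not the protocol used in the performance.

// Simplified video-streaming client: capture small frames and send raw pixels over TCP.
// The server address, port, and frame format are placeholders.
import processing.video.*;
import processing.net.*;

Capture cam;
Client client;
int W = 160, H = 120;   // small frames keep the bandwidth manageable

void setup() {
  size(160, 120);
  cam = new Capture(this, W, H);
  cam.start();
  client = new Client(this, "192.168.1.10", 9100);  // main computer in the venue (placeholder)
}

void draw() {
  if (cam.available()) {
    cam.read();
    image(cam, 0, 0);

    cam.loadPixels();
    byte[] frame = new byte[W * H * 3];
    int i = 0;
    for (int p = 0; p < W * H; p++) {
      int c = cam.pixels[p];
      frame[i++] = (byte) ((c >> 16) & 0xFF);  // R
      frame[i++] = (byte) ((c >> 8) & 0xFF);   // G
      frame[i++] = (byte) (c & 0xFF);          // B
    }
    if (client.active()) {
      client.write(frame);   // one uncompressed RGB frame per draw() call
    }
  }
}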


Six different captured video feeds are received by the main application, which uses a matrix utility to mix the videos together according to the storyline. This can be done manually, automatically, or interactively. The main computer has direct access to all client computers, so every parameter can be controlled remotely for accurate and fast adjustment. The final composition is projected in the venue using multiscreen output and projection mapping.


Moreover, Android phones are used as streaming devices that transmit live video directly into the main application. Using HTTP, video captured in real time on the phones is broadcast, and its frames are extracted inside the program so that they can be mixed into the composition. Finally, the visual result is captured and broadcast online as a tele-presence piece that is accessible from every device with a web browser. This is accomplished with UpStage, an open platform for cyber-performances and web art pieces. The UpStage web interface includes a chat function so that visitors and participants can communicate during the performance and improvise by exchanging ideas. The resulting text is re-projected in the physical space of the venue. Through this complex system a performance emerges: a well-written storyline about urban life that is transformed into a mesh of sounds, lights, videos, acting, and improvisation, creating an unexpected fusion of dimensions using real and digital entities.
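As a sketch of the frame-grabbing idea only (the actual streaming app and endpoint on the phones are not documented here), Processing can poll a JPEG snapshot URL exposed by a phone's HTTP camera server and treat the returned images as a live feed:

// Poll a JPEG snapshot served over HTTP by a phone and display it as a live feed.
// "http://192.168.1.20:8080/shot.jpg" is a placeholder; the real endpoint depends on the app used.
PImage phoneFrame;

void setup() {
  size(640, 480);
  frameRate(10);   // polling rate, kept low to limit network load
}

void draw() {
  PImage img = loadImage("http://192.168.1.20:8080/shot.jpg", "jpg");
  if (img != null) {
    phoneFrame = img;
  }
  if (phoneFrame != null) {
    image(phoneFrame, 0, 0, width, height);  // this frame can then be mixed into the composition
  }
}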

Graduate Teaching Associates

This spring I completed the course for Graduate Teaching Associates, aimed primarily at researchers with an interest in teaching. The GTA course leads to accreditation by the Higher Education Academy (HEA), giving participants the opportunity to become Associate Fellows of the HEA.


ΑΩ Magazine Publication

ΑΩ International Online Magazine is published by the Alexander S. Onassis Public Benefit Foundation scholars' association, and it covers news and activities of the foundation and its scholars. In issue 27, an article was written about my current research and art activities; more specifically, there was a detailed mention of one of my latest awards, at the Code Control Festival in Leicester, UK.

Reverb Unit

This is a reverb unit I developed in MaxMSP using a reverberation algorithm, and it is publicly available from SoniconLab. The patch is in .maxpat format, which means that you will need to install MaxMSP/Jitter on your computer to open the file. As you can see in the screenshot below, there is a sampler that allows you to open an audio file (.wav, .aiff), store it in a buffer, and play it back while it is being processed by the reverberation algorithm. The algorithm demonstrates the fundamentals of applying a series of delays to create a reverberation effect. For more information on reverberation and DSP effects you may first want to check our tutorials: Sound in Open/Closed Spaces, and Audio Processors.
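The idea of building a reverb from a series of delays can be reduced to a few lines. The snippet below, written in Processing/Java purely as an illustration of the algorithm (not a translation of the Max patch), applies several feedback comb filters with different delay times to a dry signal; the delay times and gains are arbitrary example values.

// Illustrative feedback-comb reverb: sum several delayed, attenuated copies of the signal.
// Delay times (in samples) and gains are arbitrary example values.
float[] combFilter(float[] dry, int delaySamples, float feedback) {
  float[] out = new float[dry.length];
  for (int n = 0; n < dry.length; n++) {
    float delayed = (n - delaySamples >= 0) ? out[n - delaySamples] : 0;
    out[n] = dry[n] + feedback * delayed;
  }
  return out;
}

float[] simpleReverb(float[] dry) {
  int[] delays = { 1116, 1188, 1277, 1356 };   // mutually detuned delay lines
  float[] wet = new float[dry.length];
  for (int d = 0; d < delays.length; d++) {
    float[] c = combFilter(dry, delays[d], 0.75);
    for (int n = 0; n < dry.length; n++) {
      wet[n] += c[n] / delays.length;
    }
  }
  // Mix dry and wet signals.
  float[] out = new float[dry.length];
  for (int n = 0; n < dry.length; n++) {
    out[n] = 0.6 * dry[n] + 0.4 * wet[n];
  }
  return out;
}

void setup() {
  // Quick check: process a one-second impulse at 44.1 kHz and print two samples of the tail.
  float[] dry = new float[44100];
  dry[0] = 1.0;
  float[] out = simpleReverb(dry);
  println("sample 0: " + out[0] + ", sample 22050: " + out[22050]);
}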


ReSynth: An FM Synthesizer

ReSynth is a synthesizer that uses frequency modulation (FM) to generate rich sound spectra. The unit is programmed in MaxMSP and includes many parameters for customization: sound envelopes with window functions, harmonicity and modulation indices, filters, delay, and chorus effects. ReSynth can be downloaded for free from SoniconLab; the download includes the MaxMSP source code.
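The core of FM synthesis is a carrier oscillator whose instantaneous frequency is modulated by a second oscillator, with the harmonicity ratio and the modulation index shaping the resulting spectrum. A minimal Processing/Java rendering of one second of an FM tone, using arbitrary example parameters rather than ReSynth's defaults, could look like this:

// Generate one second of a basic FM tone: carrier frequency fc, harmonicity ratio h,
// modulation index I. Parameter values are arbitrary examples.
int sampleRate = 44100;
float fc = 220;        // carrier frequency in Hz
float h = 1.5;         // harmonicity: modulator frequency = h * fc
float I = 3.0;         // modulation index: depth of the frequency deviation

float[] fmTone() {
  float fm = h * fc;                      // modulator frequency
  float[] out = new float[sampleRate];
  for (int n = 0; n < out.length; n++) {
    float t = n / (float) sampleRate;
    // y(t) = sin(2*pi*fc*t + I * sin(2*pi*fm*t))
    out[n] = sin(TWO_PI * fc * t + I * sin(TWO_PI * fm * t));
  }
  return out;
}

void setup() {
  float[] tone = fmTone();
  println("Generated " + tone.length + " samples, first sample: " + tone[0]);
}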


SoniconLab Blog

On my new website (www.soniconlab.com), I have included a blog area where I regularly upload lectures, presentations, notes, code examples, and more. You will find a large collection of presentations (more than 500 slides of original material) covering areas such as sound, acoustics, composition, music, video production, design, and programming. There are also code examples in MaxMSP, Processing, Arduino, Android, etc.

Here is a list of the topics included at the moment so that you get a clear idea of the blog content:

- Basic Sound Theory (presentation, code examples)
- Human Hearing (presentation)
- Binaural Hearing & Stereophony (presentation)
- Sound in Open/Closed Spaces (presentation)
- Audacity (tutorial)
- Audio Processors (presentation)
- Audio Compression (presentation)
- Cubase (tutorial)
- MIDI (presentation, tutorial)
- Sound in Cinema (presentation)
- Sound in Adobe Premiere (tutorial)
- Microphones (presentation)
- Post-Production Audio (presentation)
- Video Editing (presentation)
- Amplitude Modulation, Ring Modulation, Frequency Modulation, Additive Synthesis, Discrete Summation, Waveshaping (code examples in MaxMSP)
- MP3 Arduino (code example in Arduino)
- Mobile Streaming Video (code example in MaxMSP, Processing)


Poseidon's Pull

Poseidon's Pull is a collaborative art installation that intends to record, decode, and perceive possible hidden messages from the ancient Greek god of the sea, Poseidon, and then provide a public interface that allows participants to virtually navigate the collected data pool. The installation has been exhibited so far in the following venues:

- WSU Museum of Art (Pullman-Washington, USA)
- Ionion Center for Arts and Culture (Kefalonia, Greece)
- Gezira Art Centre (Cairo, Egypt)


The project involves a large development effort, as a number of international collaborators have to record GPS data in nearby sea areas. Each participant uses a floating vessel of their preference (boat, ship, canoe, etc.), and follows a semi-random sea route for a considerable amount of time. During this route, a GPS device records location and speed. The whole process is documented using video cameras. When the ritual of this process finishes, the video and the location data are imported into the main system.


In the installation space, the visitor pulls a ship rope to activate the exploration of all the recorded activity from the participants. With every interaction, a new route is revealed on the screen, showing the path of the current selection as well as the information that was captured. A second interface was also developed, using a bowl of water (as you can see in the picture below). Participants put their hands inside and disturb the water, and the computer vision system translates this disturbance into navigation commands for the system. This interaction was chosen because it makes a clear reference to water, the most important element of this installation, and because it relates to the power of Poseidon, who is able to control ships with a single touch.
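The water interface relies on a simple computer-vision principle: the amount of change between consecutive camera frames of the bowl is treated as the amount of disturbance, and that amount drives the navigation. A minimal frame-differencing sketch in Processing, shown purely as an illustration of the principle rather than the installation's actual tracking code, is:

// Frame differencing: measure how much the camera image of the water changes between frames.
// The resulting "disturbance" value could then drive navigation speed through the data pool.
import processing.video.*;

Capture cam;
int[] previous;

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
  previous = new int[width * height];
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();

  float disturbance = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    float diff = abs(brightness(cam.pixels[i]) - brightness(previous[i]));
    disturbance += diff;
    previous[i] = cam.pixels[i];
  }
  disturbance /= cam.pixels.length;   // average per-pixel change, roughly 0..255

  image(cam, 0, 0);
  // Map the disturbance to a navigation speed (placeholder output).
  float navSpeed = map(constrain(disturbance, 0, 40), 0, 40, 0, 1);
  println("navigation speed: " + nf(navSpeed, 1, 2));
}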


If you like this project and you want to learn more, you can visit the project's official website here: http://www.poseidonspull.com/

The Cloud: An Interactive Chandelier

This installation is part of an ongoing project. In the middle of a private office in Athens there is a "cloud formation" of circular panels designed by mabarchitects. Inside this minimal chandelier there are multiple RGB LED strips, arranged by panel level so that each level has its own lighting control. An Arduino board communicates over the network with different control devices, such as mobile phones, computers, and tablets. Custom software creates lighting automation based on various conditions, such as the following (a minimal sketch of the mood-light mode follows the list):

- Manual control - separate level and RGB channel (color and light intensity)
- Mood light - automatic smooth transitions through the color spectrum
- Keyword control - keywords (such as 'romantic', 'cool', 'relaxing') automatically extract color themes from online design patterns
- Environmental control - conditions (temperature, humidity, wind speed) from all over the world drive the chandelier
- Semantic analysis via Twitter - mood is identified and projected through light.
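As a sketch of the mood-light mode only (the network address, port, and "level,R,G,B" message format below are placeholders, not the installation's actual Arduino protocol), a Processing client can step smoothly through the colour spectrum and push each colour to the board over the network:

// Mood-light sketch: cycle smoothly through the colour spectrum and send
// "level,R,G,B" lines to the Arduino controller. Address, port, and message
// format are placeholders.
import processing.net.*;

Client board;
float hue = 0;

void setup() {
  size(200, 200);
  colorMode(HSB, 360, 100, 100);
  board = new Client(this, "192.168.1.50", 8000);   // placeholder address/port
  frameRate(30);
}

void draw() {
  hue = (hue + 0.2) % 360;          // slow drift through the spectrum
  int c = color(hue, 80, 100);
  background(c);

  // Extract 8-bit RGB directly from the colour's ARGB representation.
  int r = (c >> 16) & 0xFF;
  int g = (c >> 8) & 0xFF;
  int b = c & 0xFF;

  if (board.active()) {
    // Send the same colour to every panel level (level index 0..3 assumed).
    for (int level = 0; level < 4; level++) {
      board.write(level + "," + r + "," + g + "," + b + "\n");
    }
  }
}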





The HYBRID CITY II: Subtle rEvolutions


The HYBRID CITY II: Subtle rEvolutions is an international biennial event dedicated to exploring the emergent technological character of the city. The event consists of a conference, workshops, and an online exhibition. The following project is part of my research and was exhibited during May 2013.

In this project, computational media and sensor technologies are used to measure, analyze, and control aspects of the domestic environment. Reading the measurable world from macro to micro, a large number of possibilities may create unexpected, flexible, and personalized spaces that enhance the living qualities of inhabitants, providing added layers of information, affectivity, and aesthetics through calm technologies and ubiquitous computing. A fundamental consideration in this case is to construct sensate spaces that establish the domestication of computational media, with the primary aim of elevating aspects of the inhabitants' well-being, such as mood, emotion, experience, and perception.

Environmental conditions, spatial information, circulation, virtual and physical navigation, social media, or biosensors can collectively define quantitative or qualitative information that is used to adjust and personalize each environment so that it closely matches taste and preferences. The development of middleware applications makes this goal even more feasible, providing the tools to create links between incoming data and outgoing processes, establish important automations, or suggest new creative and imaginative interactions. It therefore becomes possible to instantly connect an isolated sensor reading to projected visualizations, or to use a number of similar sensors to control the overall interior lighting. By extracting specific keywords from social media messages, or by using sentiment analysis to identify mood and emotion, properties of a personal space can be configured directly, as a multi-layered canvas. The configured space can then provide a single pixel in the larger screen of the Hybrid City, so that overall well-being is mirrored, self-consciousness is provoked, and a cartography of lifestyles and living conditions is defined.

 



Art-Athina 2013 Installation

Art-Athina is one of the biggest art shows in Greece, with hundreds of participants from around the world. This year I collaborated with Katerina Karoussos from the i-Node of the Planetary Collegium to create a telematic art piece using a custom interactive media system. The system captures video in real time from the camera connected to each remote computer, encodes it into a format suitable for network transmission, and streams audio and video to the destination in the gallery space. A number of remote locations - from Crete to Jerusalem and Paris - are streamed into the final real-time canvas, creating interesting narratives through real and virtual spaces.



Sensorama Workshop

Sensorama Workshop at the GUC (German University in Cairo), 1st & 2nd April 2013, together with Mike Phillips and Ziad Ewais, i-DAT, Plymouth University.

The workshop consisted of the following modules:
- Sensors and the measurable world
- Sensor technology and Interactive Art
- Introduction to Arduino
- Using the Arduino Starter Kit (buttons, switches, LEDs, photo-resistors, potentiometers, etc)
- Arduino communication with Processing and MaxMSP (see the sketch after this list)
- Visualizing physical data
- Android SDK and mobile sensors (accelerometer, compass)
- Extracting brainwave frequencies with Mindwave
- Computer vision (blob detection, face recognition)
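For the Arduino-to-Processing module, a minimal example of the kind of link covered is shown below; it assumes an Arduino that prints one analog reading per line at 9600 baud, which is an assumption for this sketch rather than the workshop's exact code.

// Read one sensor value per line from an Arduino over serial and visualise it.
// Assumes the Arduino sketch does Serial.println(analogRead(A0)) at 9600 baud.
import processing.serial.*;

Serial port;
float value = 0;

void setup() {
  size(400, 200);
  // Open the first available serial port; adjust the index for your machine.
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line != null) {
    line = trim(line);
    if (line.length() > 0) value = float(line);   // analogRead() range: 0..1023
  }
}

void draw() {
  background(240);
  fill(50, 120, 200);
  // Draw a bar proportional to the sensor reading.
  rect(20, 80, map(value, 0, 1023, 0, width - 40), 40);
}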


* Example code used in this workshop will be available to download soon.

The Source: DI-EGY Festival Exhibition, Cairo, 27th of March - 10th of April, 2013

The Source is an interactive piece co-created with Mike Phillips and Ziad Ewais, i-DAT, Plymouth University, for the DI-EGY Festival Exhibition, which took place at the Gezira Art Centre, Cairo, Egypt, from the 27th of March until the 10th of April 2013.


This installation becomes a virtual space for a sonic and visual composition that is created in real time from information captured from various sources around the world. XML and other data feeds from environmental sources are used by the system to define the score according to unpredictable events such as solar wind speed, earthquakes, or average temperature. The composition consists of layers/channels that span macro to micro structures, manifesting as granular noises, high-pitched frequencies, or low-bass resonances in the audio domain, and as spectrograms, streaming data text, or colorized planes in the visual domain. Moreover, streaming audio is received from various sources around the globe, and 3D objects are morphed and animated according to specific events. A computer vision system is implemented within the space to track visitors' position and movement, as this is the interaction method used to interpolate between the different composition layers. When in stasis, the composition returns to a static and almost inactive state.
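As an illustration of the data-to-score mapping only (the feed URL, element names, and value ranges below are placeholders, not the feeds actually read by The Source), a Processing sketch can periodically load an XML feed and map one of its values to a score parameter:

// Periodically load an environmental XML feed and map one value to a score parameter.
// The URL and element names are placeholders for whatever feed the system reads.
float solarWindSpeed = 0;   // latest reading
float tone = 0;             // derived score parameter (e.g. pitch of a high-frequency layer)

void setup() {
  size(400, 200);
  frameRate(1);   // poll slowly; real feeds update infrequently
}

void draw() {
  background(0);
  try {
    XML feed = loadXML("http://example.org/space-weather.xml");   // placeholder URL
    XML node = feed.getChild("solarwind");                        // placeholder element
    if (node != null) {
      solarWindSpeed = node.getFloatContent();
    }
  } catch (Exception e) {
    // Keep the last good reading if the feed is unreachable.
  }

  // Map the reading (assumed 250..800 km/s) to a frequency-like parameter.
  tone = map(constrain(solarWindSpeed, 250, 800), 250, 800, 200, 4000);

  fill(255);
  text("solar wind: " + solarWindSpeed + " km/s  ->  tone: " + int(tone) + " Hz", 20, height/2);
}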

Objects of Affect: The Domestication of Ubiquity

Paper presentation at: CR13 International Research Conference on Digital Arts in the series Consciousness Reframed: Art and Consciousness in the Post-Biological Era, DI-EGY Festival, Cairo, Egypt, 2013. The presentation took place at the GUC (German University in Cairo) on the 27th of March 2013, together with Mike Phillips, i-DAT, Plymouth University. Some parts of the publication / presentation follow.

"This paper contextualizes digital practices within architectural spaces, and explores the opportunities of experiencing and perceiving domestic environments with the use of media and computing technologies. It suggests methods for the design of reflexive and intimate interiors that provide informational, communicational, affective, emotional, and supportive properties according to embedded sensorial interfaces and processing systems. To properly investigate these concepts, a fundamental criterion is magnified and dissected: dwelling, as an important ingredient in this relationship entails the magical power to merge physical environment with the psyche of inhabitants. For this reason, a number of views are presented and discussed, providing necessary conditions to include matters of affectivity, ubiquity, and layering complexity of interior space. Moreover, specific processes of the possibilities of the digital are mentioned, and examples are presented of the infusion and diffusion of ubiquitous computing technologies within domestic spaces. To briefly conclude, this article is an attempt to discuss the relationship of human-architecture-computer symbiosis and the design process of creative and innovative spaces that affect states of memory, perception, experience, as well as mood and emotion."


Pulse Mode: An Interactive Participatory Audiovisual Installation

Pulse Mode is a commissioned artwork developed for the Code Control Festival, exhibited at Phoenix / Leicester Arts Centre during 22-24 March 2013 (Leicester, UK). A fundamental requirement of this award was to use MaxMSP/Jitter to develop an original piece that exhibits unique properties and suggests new possibilities for computational interaction and control.

Pulse Mode allows participants to interact in real time with the audiovisual mix and trigger events that represent social engagement and mirror the aesthetic preferences of the collaborative interaction. The installation space consists of a fragmented 3D screen of 7 irregular rectangular shapes, an info screen (tablet), a Mindwave sensor, and a second tablet for the Mindwave settings. Participants' mobile devices can be used to select preferences and adjust properties of the system in real time, using a web interface and a server that communicates with the processing units. An Ableton Live set contains all the audio tracks, which are controlled by Max For Live devices according to the incoming information. The MaxMSP/Jitter visual processing unit controls the composition and the projection mapping, and a Processing sketch reads and controls the Mindwave interface.


An important consideration for the system interaction was to allow every participant to control aspects of the performance. For that reason an HTML5 webpage was created so that anyone with a mobile device and a web browser can select various audiovisual properties according to their personal taste and preferences. Node.js was used as the main server, communicating with the system over OpenSoundControl (OSC).
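On the receiving side, a processing unit only needs to listen for the OSC messages relayed by the server. A reduced Processing sketch using the oscP5 library is shown below; the address patterns "/genre" and "/fx" are made up for this example and do not reflect the actual message scheme.

// Listen for OSC messages relayed by the Node.js server and react to them.
// Requires the oscP5 library; the address patterns and port are placeholders.
import oscP5.*;
import netP5.*;

OscP5 osc;
String lastGenre = "none";

void setup() {
  size(400, 200);
  osc = new OscP5(this, 12000);   // listen on port 12000 (placeholder)
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/genre")) {
    lastGenre = msg.get(0).stringValue();   // e.g. a genre vote from a participant's phone
  } else if (msg.checkAddrPattern("/fx")) {
    int fxIndex = msg.get(0).intValue();    // toggle an audio/visual effect
    println("toggle effect " + fxIndex);
  }
}

void draw() {
  background(30);
  fill(255);
  text("last genre vote: " + lastGenre, 20, height/2);
}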

 

A QR-code sticker in the installation space directs participants straight to the web address, where they find an interface that allows them to select a music genre, audio and visual effects, and write and send text messages. All the genre selections are received and collectively analyzed, giving a ranking of preferences. Just before the current track finishes, the system looks for the genre with the most votes and randomly selects a music track from that genre as the next selection. When the new track has been selected, the previous votes are erased and the voting starts again. This feature provides a method of arranging the macro-montage of the music tracks according to user preferences. The users therefore set the style of music directly in real time, and in turn define the visual imagery, which is composed to match qualities of the audio (speed, noise, texture).
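The voting logic itself is straightforward; a reduced Processing/Java version of the idea (with placeholder genres and track names, not the installation's library) might look like this:

// Tally genre votes, pick the winning genre when the current track ends,
// choose a random track from that genre, then reset the votes.
import java.util.HashMap;

HashMap<String, Integer> votes = new HashMap<String, Integer>();
HashMap<String, String[]> tracksByGenre = new HashMap<String, String[]>();

void setup() {
  // Placeholder track lists, not the actual library of the installation.
  tracksByGenre.put("techno", new String[] { "techno_01", "techno_02", "techno_03" });
  tracksByGenre.put("funk",   new String[] { "funk_01", "funk_02" });
  tracksByGenre.put("dub",    new String[] { "dub_01", "dub_02", "dub_03", "dub_04" });

  // Simulate a few incoming votes.
  vote("techno"); vote("dub"); vote("dub"); vote("funk"); vote("dub");
  println("next track: " + pickNextTrack());
}

void vote(String genre) {
  Integer current = votes.get(genre);
  votes.put(genre, current == null ? 1 : current + 1);
}

String pickNextTrack() {
  // Find the genre with the most votes.
  String winner = null;
  int best = -1;
  for (String genre : votes.keySet()) {
    if (votes.get(genre) > best) {
      best = votes.get(genre);
      winner = genre;
    }
  }
  votes.clear();   // voting starts again for the next track
  if (winner == null) return null;

  // Pick a random track from the winning genre.
  String[] tracks = tracksByGenre.get(winner);
  return tracks[int(random(tracks.length))];
}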


For this installation 300 music tracks have been imported, looped, and warped/synced in Ableton Live. The tracks have been organized and categorized according to their music style, and each channel in Ableton contains all the tracks of a specific genre. Tempo values vary from 70 to 170 BPM (beats per minute), so when the style changes, the master tempo is automatically set to the average tempo of that genre.


The control of the Live set is handled by the Max For Live devices that were developed in MaxMSP. The first device collects the genre selections and sets the new track based on the voting system. It also saves the overall votes (from the start of the session) into a text file that can be further analyzed using data visualization techniques, providing useful insight into the participants' tastes and preferences. The device also sets the On/Off state of the audio effects (selections are instantly triggered by the users from the web interface). Finally, the device collects track information (genre, name, duration, remaining time) and streams it over the network to the main info screen (tablet), so that users have visual feedback. A second M4L device contains the audio effects that are triggered by the users (Shuffler, Modulation Delay, Flanger, Filter, Panning, and Reverb). The effects closely resemble those found in professional DJ mixers and CD/DVD players (e.g. the Pioneer CDJ series).


Following that, the M4L devices trigger events in the visual composition, which uses a MaxMSP/Jitter patch (loosely based on the VJ Mode application developed by SoniconLab) to create a complex visual landscape from various properties of the music mix. One thousand (1,000) visual clips are included in the system, and each one is selected based on various musical properties, creating a unique composition at every instance. The visual effects are also triggered by the users in real time (through the web interface), and the final result is projected and mapped onto the 7 planes in the installation space. The projection mapping is configured within the same Max patch. Finally, the visual mix uses scrolling text (of various sizes and speeds) based on the messages sent by the participants, creating an unexpected real-time graffiti.


Pulse Mode raises issues of authority and control, providing tools that open-source the control of an audiovisual performance and allow the participants to define the environment and gain direct access to its functions. However, authority may return with the use of a Mindwave sensor that is able to bypass user selections according to the brain activity of its wearer (more specifically, the attention level). The wearer's brain activity thus competes with the other users' selections, and when it reaches higher levels it becomes easier to dictate preferences over the rest of the public.


Pulse Mode therefore presents a complex audiovisual system that creates multiple automations and interactions using diverse protocols, programs, and platforms. Nevertheless, it remains fully accessible to a large number of participants, allowing them to define and adjust the overall outcome easily, playfully, and efficiently.

Credits:

Hardware Support: iDAT, Plymouth University
Interior Design: Kallia Platirrahou
Web/Server Design: Florian Bruckner
Graphic Design: Andrew Leeke
Special Thanks: Sean Carroll, Chris Tyrer

Catalyst Award for Code-Control Festival

I have been awarded a Catalyst Award commission to develop an original interactive artwork for the Code Control Festival exhibition at Leicester Arts Centre (22nd to 24th March 2013). The installation will consist of real-time audiovisual interactions, mobile and web interfaces, projection mapping, and Mindwave sensors used by the audience. More information about the system will be posted here soon.


Paper Publication in DI-EGY Conference, Cairo

Paper accepted for publication at the DI-EGY Conference in Cairo, Egypt: an international conference on digital arts within the international research conference series Consciousness Reframed: Art and Consciousness in the Post-Biological Era, in cooperation with the Planetary Collegium, Plymouth University.

Title of paper: "Objects of Affect: The Domestication of Ubiquity".

Abstract: " [...] Recent technological trends anticipate the embedding of small sensorial interfaces with low-intelligence and easy complexity into every space we use, occupy, and dwell. As technology becomes more ubiquitous and ambient, silent manifestations of computational intelligence appear to surround domestic spaces, and slowly they become important aspects of an inhabitant's lifestyle. [...] "


Sensors R&D

Part of my current research at i-DAT is to examine, prototype, implement, and test under normal daily activities a variety of sensor technologies that can identify a number of (physical/virtual) properties, such as location, personalization, biological information, movement in space, and environmental conditions, and also identify the mood and emotion of inhabitants in domestic and interior spaces.

The physical sensor interfaces I am using at the moment include: GPS sensor/shield, RFID tags/reader, Neurosky Mindwave, Kinect, and environmental sensors (temperature, humidity, rain, wind, sunlight, sound, radiation, CO, CO2, methane, propane). The combination of these sensors is going to provide the necessary information for the system to evaluate, analyze, and personalize aspects of the inhabitant's environment.


For example, I am developing a system for wireless control of interactive lighting in domestic/interior environments that makes use of Arduino boards with WiFly and DMX shields. Based on the information gathered in the different zones of the space (i.e. environmental conditions such as temperature and sunlight), the light composition is altered and adjusted to the current conditions. Aesthetics and practicality are also important, and are studied as well, in order to make spaces that adapt to inhabitants' preferences. Below are some images from the real-time 3D simulation of the system in Unity.
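A stripped-down sketch of the mapping step is shown below; the zone count, sensor ranges, and the "zone,R,G,B" serial line format are assumptions for illustration, not the actual system's DMX protocol. Per-zone environmental readings are converted into brightness and warmth values and written out to the lighting controller:

// Map per-zone environmental readings to simple lighting values and send them
// to the controller over serial. Zone count, ranges, and the line format are assumptions.
import processing.serial.*;

Serial controller;
float[] zoneTemperature = { 19.5, 22.0, 24.5 };  // placeholder readings per zone, in Celsius
float[] zoneSunlight    = { 0.2, 0.6, 0.9 };     // placeholder readings, 0 = dark, 1 = full sun

void setup() {
  size(200, 200);
  controller = new Serial(this, Serial.list()[0], 115200);
  frameRate(2);   // lighting does not need fast updates
}

void draw() {
  background(0);
  for (int zone = 0; zone < zoneTemperature.length; zone++) {
    // Warmer zones get warmer light; darker zones get brighter light.
    float warmth = map(constrain(zoneTemperature[zone], 15, 30), 15, 30, 0, 1);
    float level = map(constrain(zoneSunlight[zone], 0, 1), 0, 1, 255, 60);

    int r = int(level);
    int g = int(level * (0.8 - 0.3 * warmth) + 50 * warmth);
    int b = int(level * (1.0 - warmth));

    controller.write(zone + "," + r + "," + g + "," + b + "\n");
  }
}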


The information received from the sensors can be spatially distributed across different media, such as lighting (as in the example above), sound, music, visuals, photographs, or even smells. In order to create associations between inputs and outputs, I developed an Android application that adjusts these connections directly in real time. The interface also includes a number of functions for immediate personalization of the media in the interior space. Further experiments are going to make use of touchscreen surfaces (such as this one), gesture-based interactions with Kinect/Leap, and semantic analysis of the user's social media activity.



Plinthos Pavilion in the 7th Biennale, Athens.

Plinthos Pavilion is a project I created with mabarchitects for the Interior Design Show 2010 in Metamorphosis, Athens. For this project I was responsible for developing a reactive space using sonic landscapes and light ambiences. A real-time composition engine was designed to read visitors' actions and create an atmosphere that influences experience and perception.

This year, this project is exhibited in the 7th Biennale of Young Architects at the Benaki Museum.
Dates: 21 November 2012 - 13 January 2013


FullDome UK Festival, 2012

Participation in lectures, talks, presentations, and viewings at the two-day FullDome Festival at the National Space Centre in Leicester, UK. Some of the viewings included: Earthquake (Cal Academy), We are Aliens (NSC Creative), Super Volcanoes (Spitz), Robot Explorers (E&S), Escher's Universe (Parque de las Ciencias & El Exillio), Robin Sip (Mirage 3d), Ancient Skies (Sky-skan), Matrix Optimizer, TBC, Life of Trees (Softmachine), United VJs Real-Time Performance, The Search Engine (DJ Food).