This is part two of my degree project: Connected Interactions
Seamlessly merging the physical and digital to create interactions with improved sensory richness.
This project looks at the two main discourses of interaction design, screen-based and object-based, and positions itself in the gap between the two. It questions whether it is possible to combine them and create products that take advantage of both the flexibility of screens and the tactility of tangible interfaces.
The interaction-design paradigm of the moment centers on swiping our fingertips on glass, be it phones, tablets, computers, in-car systems, or even refrigerators. If we are to believe companies like Google, Microsoft and Samsung, the coming paradigm of interaction design is even less tangible, with interaction centered around voice commands and waving your hands in the air.
More tangible options exist that often offer more natural interactions with greater tolerances and error margins, and that stimulate a wider range of human abilities than their screen counterparts. However, more often than not these tangible interfaces end up having very few functions and are limited in terms of adaptation and customization.
By combining the physical with the intangible, this project tries to tame the technology beast: making technology, connectivity, functions, and actions more graspable and relatable without losing out on function-rich screen interfaces. It seamlessly integrates interactions with the digital world into mundane scenarios, making sure that technology becomes in tune with people and our environments, not the other way around.
This concept attempts to make wireless connectivity and communication more graspable by mapping intangible actions, functions and settings to real-world tangible actions.
The observation that led to this concept comes from audio consumption, and more specifically the mismatch between where content usually resides today (in smartphones) and what is used to amplify the playback (stereos and speakers).
In a typical audio-playback scenario a clash of interfaces occurs. The smartphone that holds the content is connected either via an audio cable or via wireless technology (usually Bluetooth) to an amplifier. More often than not you get double volume controls, double playback controls and so on. This, together with the smartphone's existing interface duality, with some functions residing on the screen and some mapped to physical keys, makes for a complex interface system.
Placing your phone on one of the three unmarked positions starts one of the three actions (Play, Pause and Radio). These positions are only visible when powering up the device and briefly as the action initiates. The reasoning behind not having any marks is that when an interface is this simple and is used by the same people all the time, markings outlive their usefulness very quickly and become nothing more than visual clutter.
Inspiration for the object and how it is positioned in the home came from the “key bowl” – a place where you put your keys when you come home, since you have no use for them within the home. This isn't quite true of your phone, but with more and more devices competing for your attention, together with the fact that phones are getting larger and larger, many people do unload their phone somewhere when they get home.