My Emotive Bot
Over the last six weeks I have tried to give my hacked robot behaviours that make it feel more alive and easier for a person to interact with. My aim was to investigate whether behaviour-based language could replace verbal language between a human and their personalised robot.
Similar Works
One of my favourite related works is the AUR desk lamp by Guy Hoffman (2007). This robot resembled the Pixar desk lamp and was able to anticipate where the person was going to look. In my view, the core features that made this project successful were that it built the human-robot relationship on behavioural language, it retained its objectness, and it served as a capable companion to the human in the workspace.
Bio-Sensing
I tried to drive the behaviours of my personalised robot with biometrics. There is a range of research and case studies on robots and biosensing and their possibilities in art, installations, performances, therapy, and so on.
At this stage, I tried to measure finger pressure, heart rate, and skin moisture, and I simulated both the emotive and the mobile behaviours of the robot; a minimal sketch of the mapping I have in mind follows.
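To make the sensor-to-behaviour idea concrete, here is a minimal Python sketch. The sensor stubs (read_pressure, read_heart_rate, read_gsr), the thresholds, and the behaviour names are all hypothetical placeholders, not the robot's actual hardware or calibrated values; the sketch only illustrates classifying a biometric snapshot into a coarse emotive state that triggers an emotive and a mobile behaviour.

```python
import random
import time

# Hypothetical sensor stubs -- stand-ins for real FSR, pulse, and GSR readings.
def read_pressure():    # finger pressure, normalised 0.0-1.0
    return random.random()

def read_heart_rate():  # beats per minute
    return random.uniform(55, 110)

def read_gsr():         # skin moisture / conductance, normalised 0.0-1.0
    return random.random()

def classify_state(pressure, bpm, gsr):
    """Map a biometric snapshot to a coarse emotive state.
    Thresholds are illustrative, not calibrated values."""
    if bpm > 95 or gsr > 0.7:
        return "agitated"
    if pressure > 0.6:
        return "focused"
    return "calm"

# Each emotive state maps to an (emotive, mobile) behaviour pair.
BEHAVIOURS = {
    "agitated": ("slow breathing motion", "retreat slightly"),
    "focused":  ("steady gaze", "lean toward user"),
    "calm":     ("gentle idle sway", "stay in place"),
}

if __name__ == "__main__":
    for _ in range(3):
        state = classify_state(read_pressure(), read_heart_rate(), read_gsr())
        emotive, mobile = BEHAVIOURS[state]
        print(f"{state}: emotive={emotive!r}, mobile={mobile!r}")
        time.sleep(1)
```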
Last Thursday, at the project fair, I received interesting and unexpected comments on my robot. To sum up, most of the critiques urged me to settle on a solid, simple idea for this personalised robot: why do I want to hack the human body through sensors, what do these biometrics produce, what are the core features of my personalised robot, and to what extent is it going to affect human life?
For further steps, I want to focus on biohacking, and mostly on the output of the biohacking process: the behaviour that results from the biometrics and the behaviour of my personalised robot. I will stage mock situations for my robot in an installation, at home, in a workspace, and so on, in order to find the best possible context for it. In addition, I want to learn about existing RL algorithms (such as Bayesian RL) that could simulate the mental state of humans; a toy sketch of that idea follows.
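As a first step toward Bayesian RL, here is a toy Python sketch of Thompson sampling on a Bernoulli bandit, one of the simplest Bayesian RL setups. The robot treats each behaviour as an arm, keeps a Beta posterior over how often that behaviour earns a positive human reaction, and samples from the posteriors to pick what to do next. The behaviour names and the simulated "true" response rates are invented for illustration; in a real setup the feedback would come from the biometrics.

```python
import random

# Hypothetical behaviours the robot can try, with invented "true"
# probabilities of a positive human response (unknown to the robot).
TRUE_RESPONSE_RATE = {"nod": 0.7, "lean_in": 0.5, "sway": 0.3}

# Beta(1, 1) prior per behaviour: [alpha, beta] = [successes + 1, failures + 1].
posterior = {b: [1, 1] for b in TRUE_RESPONSE_RATE}

def pick_behaviour():
    # Thompson sampling: draw one sample from each Beta posterior
    # and act with the behaviour whose sample is highest.
    samples = {b: random.betavariate(a, c) for b, (a, c) in posterior.items()}
    return max(samples, key=samples.get)

def observe(behaviour):
    # Simulated human feedback; here this stands in for biometric signals.
    return random.random() < TRUE_RESPONSE_RATE[behaviour]

for step in range(500):
    b = pick_behaviour()
    if observe(b):
        posterior[b][0] += 1  # success: update alpha
    else:
        posterior[b][1] += 1  # failure: update beta

for b, (a, c) in posterior.items():
    print(f"{b}: estimated response rate {a / (a + c):.2f}")
```

Over enough steps, the posteriors concentrate on the behaviours that reliably get positive responses, which is the kind of adaptive mental-state modelling I want to explore.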