SYNCAM :: Relearn 2014
This worksession proposes to experiment with, and document, the construction of DIY tools to record and display audio-visuals, in order to open up, as much as possible, a creative understanding of the multimedia workflow.
Considering that a tool influences its usage and vice versa, what happens if you lean into this cause-and-effect process by building your own tool?
The device can be built around a Raspberry Pi (a micro computer), augmented by autonomous modules that act as synaesthetic functions communicating with each other in real time. As the display also influences the way of filming, a whole apparatus (dispositif) to record and display should be considered. The outcomes being foreseen as experimental, the worksession will also be about feeding the imagination about what a camera could be, while documenting the discoveries and/or hypotheses.
http://f-lat.org/kiwi/doku.php/fr:edu:relearn:syncam::start
Useful commands
you can list available usb devices with
lsusb
record audio input
arecord -f cd audio.wav
test your speakers
speaker-test
set volume
alsamixer
playback audio file
aplay -f cd file.wav
to see a cpu activity monitor
top
Gstreamer installation on the π
installation
to install gstreamer-0.10
sudo apt-get install gstreamer-tools gstreamer0.10-plugins-bad gstreamer0.10-plugins-ugly gstreamer0.10-plugins-good gstreamer-ffmpeg gstreamer0.10-alsa
to install gstreamer 1.0
edit the file /etc/apt/sources.list
- sudo nano /etc/apt/sources.list
and add the following on a new line
then do
- sudo apt-get install libgstreamer1.0-0 libgstreamer1.0-0-dbg libgstreamer1.0-dev liborc-0.4-0 liborc-0.4-0-dbg liborc-0.4-dev liborc-0.4-doc gir1.2-gst-plugins-base-1.0 gir1.2-gstreamer-1.0 gstreamer1.0-alsa gstreamer1.0-doc gstreamer1.0-omx gstreamer1.0-plugins-bad gstreamer1.0-plugins-bad-dbg gstreamer1.0-plugins-bad-doc gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps gstreamer1.0-plugins-base-dbg gstreamer1.0-plugins-base-doc gstreamer1.0-plugins-good gstreamer1.0-plugins-good-dbg gstreamer1.0-plugins-good-doc gstreamer1.0-plugins-ugly gstreamer1.0-plugins-ugly-dbg gstreamer1.0-plugins-ugly-doc gstreamer1.0-pulseaudio gstreamer1.0-tools gstreamer1.0-x libgstreamer-plugins-bad1.0-0 libgstreamer-plugins-bad1.0-dev libgstreamer-plugins-base1.0-0 libgstreamer-plugins-base1.0-dev
look at your webcam using gstreamer
gst-launch-0.10 v4l2src device=/dev/video0 ! ffmpegcolorspace ! xvimagesink
grab a picture from your webcam using gstreamer
gst-launch-0.10 v4l2src ! ffmpegcolorspace ! pngenc ! filesink location=image.png
FFmpeg
to install simply do
sudo apt-get install ffmpeg
or
how to compile ffmpeg for the raspberry pi
https://trac.ffmpeg.org/wiki/CompilationGuide/RaspberryPi
simple audio capture with ffmpeg
ffmpeg -f alsa -i default audio.wav
simple video capture with ffmpeg
ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 video.ogv
Video4linux2Loopback
sudo apt-get install v4l2loopback-dkms
modprobe v4l2loopback
STREAM
stream audio with gstreamer on an icecast server
gst-launch-0.10 pulsesrc device=alsa_output.pci-0000_00_1b.0.analog-stereo.monitor ! audioconvert ! opusenc ! oggmux ! shout2send ip=192.168.100.114 port=8000 password=syncam mount=syncam01.ogg
stream video to Icecast with the raspicam
(does not work)
raspivid -t 0 -fps 15 -w 640 -b 750000 -h 480 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! queue ! videoconvert ! theoraenc quality=16 ! queue ! oggmux name=mux alsasrc ! audio/x-raw-int,rate=8000,channels=1,depth=8 ! queue ! audioconvert ! vorbisenc ! queue ! mux. mux. ! queue ! shout2send ip=192.168.100.114 port=8000 password=syncam mount=/whatever.ogg
Projects
Stéphane (Noël)
1) capture an image from the webcam, store it with a unique name (based on date ?)
for 1 jpg image
ffmpeg -f video4linux2 -i /dev/video0 -vframes 1 capture_$(date +'%d%m%y').jpg
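The `$(date +'%d%m%y')` pattern above gives one name per day; adding hours, minutes and seconds (as the later commande.sh does) makes the name unique per capture. A small Python sketch of the same strftime formatting (the datetime value is chosen for illustration only):

```python
from datetime import datetime

# Same naming scheme as commande.sh: capture_ + day month year hour min sec.
# Including the time avoids overwriting an image captured earlier that day.
def capture_name(now):
    return now.strftime("capture_%d%m%y%H%M%S.jpg")

print(capture_name(datetime(2014, 7, 8, 14, 30, 5)))  # capture_080714143005.jpg
```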
2) launch capture and download from an another computer in ssh
3) transform an image in ascii and send it through irc
4) from an irc message, launch capture + ascii transform + send line by line through irc
Capture an image and transfer it from pi to mac
What I have done
The idea is to launch a single command from the mac to the raspberry pi that captures an image from the webcam and transfers this image to the mac
Hardware : macbook pro - connected through ethernet to - raspberry pi with usb camera
On Raspberry pi
installed raspbian
hostname achille (username still pi)
enabled ssh
install : ffmpeg, guvcview, libnss-mdns (using the command sudo apt-get install <program name>)
connected the raspberry pi over ethernet; shared internet in the system prefs from airport to all via ethernet
wrote a file commande.sh in /home/pi
// code of commande.sh
#!/bin/bash
echo "capturing an image from the webcam"
nom_fichier=capture_$(date +'%d%m%y%H%M%S').jpg
echo "Its name is $nom_fichier"
ffmpeg -f video4linux2 -i /dev/video0 -vframes 1 "$nom_fichier"
echo "trying to send the image to stéphane's computer, in the 'Documents' folder"
scp "$nom_fichier" stephane@MacBook-Pro-de-stephane-noel.local:Documents
// end code
! attention ! stephane@MacBook-Pro-de-stephane-noel.local is my machine. ":Documents" is the path to the place where you want to store the image
! attention ! no space between the host and the path, as you can see in the code above
about scp : http://www.hypexr.org/linux_scp_help.php
make this script executable on the pi
// terminal code
chmod +x commande.sh
// end terminal code
at this point, the script can be executed on the pi
// terminal code
./commande.sh
// end code
On the Mac
in system prefs, enable "Remote Login" (session à distance) for all users
open terminal and call pi by ssh
// terminal code
ssh pi@achille.local
//end code
enter the pi password
call the script
// terminal code
./commande.sh
// end code
(passwords are required in both directions)
Alternative to this solution on pi website : http://www.raspberrypi.org/learning/webcam-timelapse-setup/
Convert an image from the webcam into an ascii message
add jp2a to the raspberry
the tool needs a jpeg file as input and outputs a text file
// code
jp2a --width=60 file.jpg --output textfile.txt
// end code
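jp2a's core idea is a brightness-to-character mapping. A minimal pure-Python sketch of that idea, on a synthetic grayscale row (the character ramp is illustrative; real jp2a also decodes and rescales the jpeg):

```python
# Minimal sketch of brightness-to-ascii mapping, the idea behind jp2a.
# RAMP goes from dark (space) to light; an image would supply the pixel rows.
RAMP = " .:-=+*#%@"

def to_ascii(rows):
    # rows: grayscale pixel rows, values 0 (black) to 255 (white)
    return "\n".join(
        "".join(RAMP[min(p * len(RAMP) // 256, len(RAMP) - 1)] for p in row)
        for row in rows)

print(to_ascii([[0, 64, 128, 192, 255]]))  # → " :+#@"
```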
Upload the image to a web server over ftp
Add ftp with apt-get install ftp and reboot !
the raspberry has to be connected to the net, and you need an ftp account
Two files on the raspberry : poetrybot.py and upload2.sh
a folder online with a php file (and css)
Don't forget to chmod the scripts so they are executable
// code of upload2.sh
#!/bin/sh
ORIGIN=tempoetry
DEST=/www/irc_poetry/uploads/
#your ftp access here
HOST=XXXXX
LOGIN=XXXXXX
PASSWORD=XXXXX
PORT=21
#cd into the local source folder, here tempoetry
cd $ORIGIN
DOSSIER=$(ls)
echo "found the folder $DOSSIER in $ORIGIN"
cd $DOSSIER
echo "it contains the texts "
echo *.txt
echo "It contains the images"
echo *.jpg
echo "Creating an animated gif from the images, ok ?"
convert -delay 40 *.jpg animation.gif
echo "done."
echo "uploading this folder over ftp to $DEST$DOSSIER"
ftp -n -v $HOST $PORT << END_SCRIPT
bin
quote USER $LOGIN
quote PASS $PASSWORD
cd $DEST
mkdir $DOSSIER
cd $DOSSIER
put capture001.jpg
put texte.txt
put animation.gif
bye
END_SCRIPT
sleep 12s
echo "Finished. Cleaning up the $ORIGIN folder"
cd ..
mv $DOSSIER ../out_tempoetry
// end of bash code
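The heredoc above just feeds a fixed command sequence to the ftp client. As a sketch, the same sequence can be generated programmatically (the destination and file names are the ones the script uses; the credentials stay placeholders, as in the script):

```python
# Sketch: the command sequence upload2.sh feeds to the ftp client.
# DEST mirrors the script's destination; LOGIN/PASSWORD stay placeholders.
DEST = "/www/irc_poetry/uploads/"

def ftp_commands(folder, files, login="XXXXXX", password="XXXXX"):
    cmds = ["bin",                       # binary transfer mode
            "quote USER " + login,
            "quote PASS " + password,
            "cd " + DEST,
            "mkdir " + folder,           # one remote folder per poem
            "cd " + folder]
    cmds += ["put " + f for f in files]  # upload each capture/text/gif
    cmds.append("bye")
    return cmds

print(ftp_commands("poem_demo", ["capture001.jpg", "texte.txt", "animation.gif"]))
```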
Then the code of poetrybot.py
// code of poetrybot.py
#!/usr/bin/env python
# coding= utf-8
import socket
import os
import time

class IRCClient:
    socket = None
    connected = False
    nickname = 'poetbot'
    channels = ['#2084', '#syncam']
    statut = 'connecting'
    mots = ['coffee','rain','love','sun','tea','milk','right','patience','feeling','cloud','fix']
    record = []
    compteur = 0
    dossier_general = 'tempoetry'
    dossier = ''
    base_url = 'http://www.codedrops.net/irc_poetry/?id='

    def __init__(self):
        self.socket = socket.socket()
        self.socket.connect(('192.168.1.222', 6667))
        self.send("NICK %s" % self.nickname)
        self.send("USER %(nick)s %(nick)s %(nick)s :%(nick)s" % {'nick':self.nickname})
        while True:
            buf = self.socket.recv(4096)
            lines = buf.split("\n")
            for data in lines:
                data = str(data).strip()
                if data == '':
                    continue
                print "I<", data
                if self.statut == 'idle':
                    # check whether a word triggers the bot
                    for mot in self.mots:
                        if data.find(mot) != -1:
                            self.say("ho, i am interested in "+mot+". Go on", self.channels[1])
                            self.say("I take a picture, ok ?", self.channels[1])
                            self.statut = 'record'
                            self.compteur = 0
                            # capture the image
                            self.dossier = time.strftime('poem_%d%b%Y%H%M%S')
                            os.makedirs(self.dossier_general+"/" + self.dossier)
                            os.system('ffmpeg -f video4linux2 -i /dev/video0 -vframes 5 '+self.dossier_general+"/" + self.dossier+"/capture%3d.jpg")
                if self.statut == 'record':
                    message = data.split(':')
                    if len(message) > 2:
                        auteur = message[1].split('!')[0]
                        message[0] = ''
                        message[1] = ''
                        message = ' '.join(message)
                        corps = message
                        self.record.append(corps)
                        self.compteur += 1
                        if self.compteur > 10:
                            self.statut = 'write'
                if self.statut == 'write':
                    # write the collected lines to the text file
                    file = open(self.dossier_general+"/"+self.dossier+"/texte.txt", "w")
                    for lignes in self.record:
                        file.write(lignes+"\r\n")
                    file.close()
                    time.sleep(2)
                    self.record = []
                    self.statut = 'envoie'
                    self.say("i had enough, thanks", self.channels[1])
                if self.statut == 'envoie':
                    # run the upload script
                    os.system('./upload2.sh')
                    self.statut = 'idle'
                    adresse = self.base_url + self.dossier
                    for reso in self.channels:
                        self.say("you have written a nice illustrated poem with me", reso)
                        self.say(adresse, reso)
                # server ping/pong?
                if data.find('PING') != -1:
                    n = data.split(':')[1]
                    self.send('PONG :' + n)
                    if self.connected == False:
                        self.perform()
                        self.connected = True
                args = data.split(None, 3)
                if len(args) != 4:
                    continue
                ctx = {}
                ctx['sender'] = args[0][1:]
                ctx['type'] = args[1]
                ctx['target'] = args[2]
                ctx['msg'] = args[3][1:]
                # whom to reply?
                target = ctx['target']
                if ctx['target'] == self.nickname:
                    target = ctx['sender'].split("!")[0]
                # some basic commands
                if ctx['msg'] == '!help':
                    self.say('available commands: !help', target)
                # directed to the bot?
                if ctx['type'] == 'PRIVMSG' and (ctx['msg'].lower()[0:len(self.nickname)] == self.nickname.lower() or ctx['target'] == self.nickname):
                    # something is speaking to the bot
                    query = ctx['msg']
                    if ctx['target'] != self.nickname:
                        query = query[len(self.nickname):]
                        query = query.lstrip(':,;. ')
                    # do something intelligent here, like query a chatterbot
                    #print 'someone spoke to us: ', query
                    #self.say('alright :|', target)

    def send(self, msg):
        print "I>", msg
        self.socket.send(msg+"\r\n")

    def say(self, msg, to):
        self.send("PRIVMSG %s :%s" % (to, msg))

    def perform(self):
        #self.send("PRIVMSG R : Register <>")
        self.send("PRIVMSG R : Login <>")
        self.send("MODE %s +x" % self.nickname)
        for c in self.channels:
            self.send("JOIN %s" % c)
            # say hello to every channel
            self.say('I come to hear you and create something out of it, fellows', c)
        self.statut = 'idle'

IRCClient()
// end of code
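Isolated from IRC and the camera, the bot's behaviour is a small state machine (idle → record → write → envoie → idle). A testable sketch of just the trigger-and-counting logic, reusing the state names and trigger words from the script:

```python
# Sketch of poetrybot's state transitions, without IRC or hardware.
# State names and trigger words are taken from the script above.
MOTS = ['coffee', 'rain', 'love', 'sun', 'tea', 'milk', 'right',
        'patience', 'feeling', 'cloud', 'fix']

def step(statut, data, compteur):
    # Returns the next (statut, compteur) for one incoming IRC line.
    if statut == 'idle' and any(mot in data for mot in MOTS):
        return 'record', 0            # a trigger word starts a recording
    if statut == 'record':
        compteur += 1                 # one more line collected
        if compteur > 10:
            return 'write', compteur  # enough lines: write, then upload
    return statut, compteur

print(step('idle', 'pass me the coffee', 0))
```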
On the server, a folder "irc_poetry" containing a subfolder "uploads". This is important
// code of php file index.php
<!doctype html>
<html lang="fr">
<head>
<meta charset="UTF-8">
<title>IRC poetry for relearn</title>
<link rel="stylesheet" href="sinkin/stylesheet.css">
<link rel="stylesheet" href="style.css">
</head>
<body>
<div id="wrapper">
<header>
<h1>IRC poetry for relearn</h1>
<nav>
</nav>
</header>
<article>
<?php
$dossier_source="uploads/";
if(isset($_GET['id'])){
$id=stripslashes(strip_tags(trim($_GET["id"])));
$illu="";
$titre="";
$texte="";
$liste=glob($dossier_source.$id."/*.gif",GLOB_BRACE);
if(!$liste){
$liste=glob($dossier_source.$id."/*.jpg",GLOB_BRACE);
}
if($liste){
foreach ($liste as $filename) {
$illu.="<div class='illu'><img src='".$filename."' alt='what i see'></div>\n";
}
}
if(is_file($dossier_source.$id.'/texte.txt')){
if(!$fichier = fopen($dossier_source.$id.'/texte.txt', "r")) {
echo "<div style='color:red'>could not read the text</div>";
} else {
$titre=fgets($fichier, 4096);
// while not at the end of the file
while (!feof($fichier)) {
$buffer = fgets($fichier, 4096);
$texte .=$buffer."<br >";
}
}
}
echo "<h2>".$titre."</h2>";
echo $illu;
echo "<div class='texte'>".$texte."</div>";
} else {
echo "<h3>Oops! Nothing to do</h3>";
}
?>
</article>
<aside>
<?php
// list the existing folders
$liste=glob($dossier_source."*",GLOB_ONLYDIR);
if($liste){
echo "<nav><ul>";
foreach ($liste as $filename) {
$nom=str_replace($dossier_source,"",$filename);
echo "<li><a href='?id=".$nom."' title='what i see'>".$nom."</a></li>\n";
}
echo "</ul></nav>";
}
?>
</aside>
<footer>
2014 - Relearn & Stéphane Noël
</footer>
</div> <!-- /wrapper -->
</body>
</html>
// end of code
You'll need a css file. I added a font css, personally
Pierre
1. Access the raspicam remotely
connect through ssh pi@192.168.100.XXX
sudo apt-get install libnss-mdns
exit
ssh pi@newname.local
install ffmpeg and gstreamer (see above)
2. First capture! yes!
raspistill -t 2000 -o image.jpg
copy it to local computer over network
scp image.jpg pierreh@192.168.100.105:~/Bureau/relearn/image.jpg
to use a graphical interface on the raspberry
exit from ssh and ssh -X pi@raspraster.local
midori
then open the file with url bar
3. Run camera forever, taking a picture when Enter is pressed
raspistill -t 0 -k -w 640 -h 480 -o pics%02d.jpg
4. 20 pixels height seems to be the minimum for raspivid, so use imagemagick
convert pics01.jpg -resize 640x1\! pics011.jpg
5. chain
raspistill -t 0 -k -w 640 -h 480 -o pic.png -e png; convert pic.png -resize 640x1\! pic1.png
looking into imagemagick's gif dithering
http://www.imagemagick.org/Usage/quantize/#diy_symbols
dithering with symbols
python bot to capture
----
import socket
import os
from time import sleep

class IRCClient:
    socket = None
    connected = False
    nickname = 'OlBot'
    channels = ['#2084', '#syncam']
    asciiart = []

    def __init__(self):
        self.socket = socket.socket()
        self.socket.connect(('192.168.1.222', 6667))
        self.send("NICK %s" % self.nickname)
        self.send("USER %(nick)s %(nick)s %(nick)s :%(nick)s" % {'nick':self.nickname})
        while True:
            buf = self.socket.recv(4096)
            lines = buf.split("\n")
            for data in lines:
                data = str(data).strip()
                if data == '':
                    continue
                print "I<", data
                if data.find('capture') != -1:
                    # grab a still, convert it to 16 lines of ascii, keep a copy
                    os.system('raspistill -t 1 -w 320 -h 240 -o - | jp2a --height=16 - | tee ascii.txt')
                    file = open('ascii.txt', 'r')
                    for i in range(16):
                        self.say(file.readline(), '#2084')
                        sleep(0.01)
                    file.close()
                # server ping/pong?
                if data.find('PING') != -1:
                    n = data.split(':')[1]
                    self.send('PONG :' + n)
                    if self.connected == False:
                        self.perform()
                        self.connected = True
                args = data.split(None, 3)
                if len(args) != 4:
                    continue
                ctx = {}
                ctx['sender'] = args[0][1:]
                ctx['type'] = args[1]
                ctx['target'] = args[2]
                ctx['msg'] = args[3][1:]
                # whom to reply?
                target = ctx['target']
                if ctx['target'] == self.nickname:
                    target = ctx['sender'].split("!")[0]
                # some basic commands
                if ctx['msg'] == '!help':
                    self.say('available commands: !help', target)
                # directed to the bot?
                if ctx['type'] == 'PRIVMSG' and (ctx['msg'].lower()[0:len(self.nickname)] == self.nickname.lower() or ctx['target'] == self.nickname):
                    # something is speaking to the bot
                    query = ctx['msg']
                    if ctx['target'] != self.nickname:
                        query = query[len(self.nickname):]
                        query = query.lstrip(':,;. ')
                    # do something intelligent here, like query a chatterbot
                    #print 'someone spoke to us: ', query
                    #self.say('alright :|', target)

    def send(self, msg):
        print "I>", msg
        self.socket.send(msg+"\r\n")

    def say(self, msg, to):
        self.send("PRIVMSG %s :%s" % (to, msg))

    def perform(self):
        #self.send("PRIVMSG R : Register <>")
        self.send("PRIVMSG R : Login <>")
        self.send("MODE %s +x" % self.nickname)
        for c in self.channels:
            self.send("JOIN %s" % c)
            # say hello to every channel
            self.say('hello world!', c)

IRCClient()
----
KKKKKKKKKKKKK0000OOOOkkkkkkkkkxxddddooodxxx
KKKKKKKKKKKKK000OOOOOOOOkOOOkkkkxkkkxxdxxkk
KKKKKKKKKKKKK0000000000OOOOOOOOOOOOOOOxkOOO
KKKXXXXXKKKKKKKKK0000000OOOOOO00000000OkO00
KKKKXXXXXXXKKKKKKKK000000000000000K0000OO00
KKKKKKKKKKKKKKK00000000000000O0000OKKK00O00
KKKKKKKKKKKK000000000OOOOOOO00OO00O000OOOO0
KKKKKKKKKK0000000000000OO00O000O000000O0O00
KKKKXXKKK00000000000OOO00000OO0000000000000
NNNNNNXXXKK000000000OOOOOO0K0000000K0O00000
WWWWWNNNNNK000000000OOO00000KKKKKKKKKO00000
NNNNNNNNNK000000000000KK000KKKKKKKKKKO0KKK0
NNNNNNNNK000000000000KNXXXXKKKKKKKKXXO0KKKK
XXXXXNNK0000000000000XNNNXXNXXXXXXXXXKk0KKK
KKKKNXK00000000000000KXXX00XK0KNNNNXXKKkOKK
00KXXK0000K0000000000KXNNXKKK0KXXKXNNXXKkxK
----
more video streaming
Simple video streaming from Raspberry Pi to your computer with the raspicamera module and netcat
(you may want to enable the camera module in the raspi configuration)
sudo apt-get install netcat mplayer raspivid
PI camera data is sent over udp using nc (netcat) :
- pi side:
raspivid -t 0 -vf -w 480 -h 360 -fps 25 -b 800000 -n -o - | nc -u IP_ADDRESS_OF_COMPUTER_SIDE YOUR_PORT
- computer side:
nc -lu YOUR_PORT | mplayer -fps 31 -cache 1024 -
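`nc -u` on each side is just a UDP socket. A minimal Python sketch of the same send/listen pair (loopback and a throwaway payload here, where the worksession used the Pi's and the laptop's LAN addresses and piped the raw H.264 stream through):

```python
import socket

# "computer side": listen on a UDP port, like `nc -lu YOUR_PORT`
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
port = listener.getsockname()[1]

# "pi side": send datagrams to that port, like `nc -u IP_ADDRESS PORT`
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"h264 bytes would flow here", ("127.0.0.1", port))

data, addr = listener.recvfrom(4096)
print(data.decode())
sender.close()
listener.close()
```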
Stream video to IceCast server
First install oggfwd and ffmpeg2theora
sudo apt-get install oggfwd ffmpeg2theora
Send your webcam to Icecast server
with a usb webcam :
ffmpeg2theora /dev/video0 -f video4linux2 --noaudio -o - | oggfwd 192.168.1.222 8000 relearn /YOURMOUNTPOINT.ogv
with the raspicam :
raspivid -t 0 -vf -n -w 480 -h 260 -fps 25 -b 800000 -o - | ffmpeg2theora --noaudio -o /dev/stdout - | oggfwd 79.99.202.57 8000 password /pi-cam.ogv
with a raspicam and a conversion locally on the server (without audio)
on the PI:
- raspivid -t 0 -vf -w 480 -h 260 -fps 25 -b 800000 -n -o - | nc -u SERVER_IP SERVER_PORT (SERVER_IP=192.168.1.222 SERVER_PORT=xxxx)
on the Server:
nc -lu -p SERVER_PORT | ffmpeg2theora --noaudio -o /dev/stdout - | oggfwd 127.0.0.1 8000 ICECAST_PASSWORD MOUNTPOINT.ogv
good version without sound
with a usb webcam and Gstreamer
gst-launch-0.10 v4l2src ! 'video/x-raw-yuv,width=320,height=240,framerate=15/1' ! videorate max-rate=1 ! queue ! theoraenc ! oggmux ! shout2send ip=79.99.202.57 password=xxxxxx mount=variable.ogg
stream audio and video with the raspicam in udp
to send
- #!/bin/bash
- gst-launch-0.10 -v alsasrc device=hw:1 ! audioconvert ! audioresample ! vorbisenc ! udpsink host=192.168.100.102 port=1234 &
- raspivid -t 0 -w 480 -h 260 -fps 25 -b 500000 -vf -n -o - | ffmpeg -i - -vcodec copy -f h264 udp://192.168.100.102:1235
to receive
- #!/bin/bash
- gst-launch -v udpsrc port=1234 ! vorbisdec ! audioconvert ! alsasink sync=false &
- ffplay udp://@:1235
Yet another video stream to an icecast server with Gstreamer and the raspicam
TX
- raspivid -t 0 -w 640 -h 360 -fps 25 -b 1200000 -o - | gst-launch -v fdsrc ! h264parse ! rtph264pay ! udpsink host=$clientIP port=$port
RX
- gst-launch -v udpsrc port=$port ! "application/x-rtp, media=(string)video, encoding-name=H264, payload=96" ! rtph264depay ! h264parse ! decodebin ! theoraenc ! oggmux ! shout2send ip=127.0.0.1 port=8000 password=relearn mount=mountpoint.ogv
TODO
udp stream audio+video with raspivid
TX
raspivid -t 0 -w 480 -h 260 -fps 25 -b 500000 -vf -n -o - | ffmpeg -i - -f alsa -i hw:1 -vcodec copy -acodec aac -strict experimental -f h264 udp://192.168.100.102:1234
RX
ffplay udp://@:1234
TODO
stream to icecast with raspivid
#!/bin/bash
raspivid -t 0 -w 320 -h 200 -fps 15 -b 400000 -o - | \
gst-launch-1.0 -v fdsrc ! h264parse ! decodebin !\
videorate max-rate=1 ! theoraenc ! oggmux ! shout2send ip=79.99.202.57 password=jonctions mount=whatever.ogg
configure a static ethernet ip address
edit /etc/network/interfaces
iface lo inet loopback
#iface eth0 inet dhcp (remove this line )
iface eth0 inet static (and add this )
address 192.168.1.22x (223 and 224 are already taken)
netmask 255.255.255.0
gateway 192.168.1.1
Standard video encoding for the raspicam
SD Low: -w 480 -h 260 -fps 25 -b 800000
SD Medium: -w 640 -h 360 -fps 25 -b 1200000
SD High: -w 960 -h 540 -fps 25 -b 1800000
HD Ready: -w 1280 -h 720 -fps 25 -b 2500000
Full HD: -w 1920 -h 1080 -fps 25 -b 5000000
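One way to compare these presets is bits per pixel per frame (bitrate divided by width × height × fps). The helper below is just that arithmetic, not part of raspivid:

```python
# Bits per pixel per frame for a raspivid preset: a rough measure of how
# hard the H.264 encoder must compress. Higher = more bits per pixel.
def bits_per_pixel(width, height, fps, bitrate):
    return bitrate / float(width * height * fps)

for name, args in [("SD Low", (480, 260, 25, 800000)),
                   ("Full HD", (1920, 1080, 25, 5000000))]:
    print(name, round(bits_per_pixel(*args), 3))
```

Note that Full HD gets fewer bits per pixel than SD Low, so the presets trade resolution against per-pixel quality, not just bandwidth.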
Installation instructions for v4l2 driver to use with raspicam
http://www.linux-projects.org/modules/sections/index.php?op=viewarticle&artid=14
ffmpeg -f v4l2 -s 640x480 -i /dev/video0 -c:v libx264 out.mp4
INSTALL FFMPEG 2.0-1
1 - download the deb package
wget http://download1764.mediafire.com/6etl5yo4c9dg/a908toqrgbruacw/ffmpeg_deb-11-11-2013.tgz
2 - untar
tar -xvzf ffmpeg_deb-11-11-2013.tgz
3 - install package
( source @ http://askubuntu.com/questions/162740/how-do-i-uninstall-ffmpeg )
sudo apt-get --purge remove libav-tools
sudo apt-get --purge autoremove
sudo dpkg -i *.deb
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/libx264.conf
sudo ldconfig
Live HLS streaming with the Pi camera and soundcard
http://www.raspberrypi.org/forums/viewtopic.php?t=54669&p=415973
scripts with different methods for streaming video
with an intermediate script that transcodes the video and sends it to an icecast server
TX
- #!/bin/bash
- clientIP=192.168.1.6
- port=1984
- #1 UDP FFMPEG
- #raspivid -t 0 -w 640 -h 360 -fps 25 -b 1200000 -n -o - | ffmpeg -i - -vcodec copy -f h264 udp://$clientIP:$port
- #2 TCP FFMPEG
- #raspivid -t 0 -w 640 -h 360 -fps 25 -b 1200000 -n -o - | ffmpeg -i - -vcodec copy -r 25 -f h264 tcp://192.168.1.222:1984
- #3 RTP GST
- #raspivid -t 0 -w 640 -h 360 -fps 25 -b 1200000 -o - \
- #| gst-launch -v fdsrc ! h264parse ! rtph264pay ! udpsink host=$clientIP port=$port
- #4 (does not work)
- #serverIP=192.168.1.223
- #clientIP=192.168.1.222
- #raspivid -t -1 -w 640 -h 360 -fps 25 -b 1200000 | \
- #gst-launch-0.10 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$serverIP port=5000
RX
- #!/bin/bash
- serverIP=192.168.1.223
- port=1984
- #1 UDP FFMPEG
- #ffmpeg2theora -f h264 -i udp://@:1984 -o /dev/stdout - | oggfwd 127.0.0.1 8000 relearn timber.ogv
- #2 TCP FFMPEG
- #ffmpeg2theora -f h264 -i tcp://192.168.1.222:1984?listen -o /dev/stdout - | oggfwd 127.0.0.1 8000 relearn timber.ogv
- #3 RTP GST
- gst-launch -v udpsrc port=$port !\
- "application/x-rtp, media=(string)video, encoding-name=H264, payload=96" ! \
- rtph264depay ! h264parse ! decodebin ! \
- theoraenc ! oggmux ! shout2send ip=127.0.0.1 port=8000 password=relearn mount=mountpoint.ogv
- #4 (does not work)
- #gst-launch -v tcpclientsrc host=$serverIP port=5000 ! \
- #gdpdepay ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! \
- #theoraenc ! oggmux ! shout2send ip=localhost port=8000 password=relearn mount=mountpoint.ogv