author    Benny Prijono <bennylp@teluu.com>    2006-01-30 18:40:05 +0000
committer Benny Prijono <bennylp@teluu.com>    2006-01-30 18:40:05 +0000
commit    0d61adeb5f784b45f76d76dad9974f4111fb3c8c (patch)
tree      4fe8830715bd6af57dd91ebca780318a645435cd /pjmedia/docs
parent    7638eeee106fe58a1225f642e733629f29418818 (diff)
Finished implementation of UA layer (to be tested)
git-svn-id: http://svn.pjsip.org/repos/pjproject/trunk@127 74dad513-b988-da41-8d7b-12977e46ad98
Diffstat (limited to 'pjmedia/docs')
-rw-r--r--  pjmedia/docs/PJMEDIA.txt  |  11
1 file changed, 11 insertions, 0 deletions
diff --git a/pjmedia/docs/PJMEDIA.txt b/pjmedia/docs/PJMEDIA.txt
new file mode 100644
index 00000000..8d6a9105
--- /dev/null
+++ b/pjmedia/docs/PJMEDIA.txt
@@ -0,0 +1,11 @@
+The way PJMEDIA works at the moment is as follows: for each destination party (e.g. the remote INVITE party), there is one media "session". For each "m" line in the SDP, PJMEDIA creates one media "stream". If the stream is bidirectional audio, then two media "channels" are created for that stream, one for each direction.
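+
+As a sketch of this hierarchy (hypothetical names, not the actual PJMEDIA declarations), the ownership could look like this in C:
+
+    /* Illustrative sketch only: one session owns the streams, and each
+     * bidirectional audio stream owns one channel per direction.
+     */
+    enum { MAX_STREAMS = 8 };
+
+    typedef struct media_channel
+    {
+        int      dir;              /* 0 = encoding/send, 1 = decoding/recv */
+        unsigned pt;               /* RTP payload type for this direction  */
+    } media_channel;
+
+    typedef struct media_stream    /* one per "m" line in the SDP */
+    {
+        media_channel enc;         /* encoder channel (RTP sender)   */
+        media_channel dec;         /* decoder channel (RTP receiver) */
+    } media_stream;
+
+    typedef struct media_session   /* one per destination party */
+    {
+        unsigned     stream_cnt;
+        media_stream streams[MAX_STREAMS];
+    } media_session;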
+
+The two channels in one stream share one instance of a "codec". A codec is a simple struct that provides encode() and decode() functions.
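+
+In C, such a codec can be expressed as a struct of function pointers, roughly as below (a sketch only; the real PJMEDIA operation names and signatures may differ):
+
+    /* A codec as a simple struct with encode()/decode() function
+     * pointers; one instance is shared by both channels of a stream.
+     */
+    typedef struct codec codec;
+
+    struct codec_op
+    {
+        int (*encode)(codec *c, const void *pcm, unsigned pcm_len,
+                      void *payload, unsigned *payload_len);
+        int (*decode)(codec *c, const void *payload, unsigned payload_len,
+                      void *pcm, unsigned *pcm_len);
+    };
+
+    struct codec
+    {
+        struct codec_op *op;       /* the encode/decode operations */
+        void            *state;    /* codec-private state          */
+    };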
+
+The media channels are connected to the appropriate "sound stream": the decoder channel (i.e. the RTP receiver) feeds the sound player stream, and the encoder channel (i.e. the RTP sender) gets its audio frames from the sound recorder stream.
+
+Both the sound player and recorder devices (or streams) are active objects (they have their own threads). The media channel only needs to register a callback function to be called when audio frames are available from, or should be supplied to, the sound devices. This approach works very well with DirectSound and with PortAudio's sound framework.
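+
+A sketch of what this pull model might look like (illustrative names, not the actual PJMEDIA or PortAudio API):
+
+    /* The sound device's own thread invokes these callbacks; the media
+     * channel merely registers them and never runs its own audio loop.
+     */
+    typedef int (*play_cb)(void *user_data, void *pcm, unsigned size);
+    typedef int (*rec_cb) (void *user_data, const void *pcm, unsigned size);
+
+    /* Player stream needs PCM to play: the decoder channel fills it in. */
+    static int on_play_frame(void *user_data, void *pcm, unsigned size)
+    {
+        /* decode the next incoming frame into pcm ... */
+        return 0;
+    }
+
+    /* Recorder stream captured PCM: the encoder channel sends it as RTP. */
+    static int on_rec_frame(void *user_data, const void *pcm, unsigned size)
+    {
+        /* encode pcm and transmit it as an RTP packet ... */
+        return 0;
+    }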
+
+With the introduction of the jitter buffer, however, another thread needs to be created for the decoder channel. This thread periodically reads RTP packets from the socket and puts the (still encoded) frames into the jitter buffer. When the sound player callback is invoked (by the sound device), it fetches a frame from the jitter buffer (instead of reading the RTP socket), decodes it, and returns the PCM frame to the sound player.
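+
+The resulting split between the two threads could be sketched like this (pseudocode in C comments; jitter_buffer_put()/jitter_buffer_get() are assumed names, not the real API):
+
+    /* Thread owned by the decoder channel: periodically pull RTP from
+     * the socket and queue the still-encoded frame in the jitter buffer.
+     */
+    static void rtp_reader_thread(void)
+    {
+        for (;;) {
+            /* recvfrom() an RTP packet on the socket          */
+            /* strip the RTP header, keep the encoded payload  */
+            /* jitter_buffer_put(jb, payload, seq, timestamp)  */
+            /* sleep until the next polling interval           */
+        }
+    }
+
+    /* Sound player callback, run by the sound device's thread; it never
+     * touches the RTP socket directly.
+     */
+    static int on_play_frame_jb(void *user_data, void *pcm, unsigned size)
+    {
+        /* jitter_buffer_get(jb, &frame)                   */
+        /* codec->op->decode(codec, frame, ..., pcm, ...)  */
+        /* the decoded PCM is returned to the player       */
+        return 0;
+    }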
+
+Now, getting back to the topic of why I think this could work as it is for your application.
\ No newline at end of file