author     Benny Prijono <bennylp@teluu.com>  2006-02-15 14:20:48 +0000
committer  Benny Prijono <bennylp@teluu.com>  2006-02-15 14:20:48 +0000
commit     7bc58324b88728bc502308c9cb385a12c0ba432e (patch)
tree       2a827c0cda142e7491d08888175666eddc4041b9 /pjmedia/docs
parent     a2e7480ba5bb56d5b7ea4772ea7790f278bc4ecf (diff)
Removed obsolete PJMEDIA.txt from docs
git-svn-id: http://svn.pjsip.org/repos/pjproject/trunk@191 74dad513-b988-da41-8d7b-12977e46ad98
Diffstat (limited to 'pjmedia/docs')
-rw-r--r--  pjmedia/docs/PJMEDIA.txt  11
1 file changed, 0 insertions(+), 11 deletions(-)
diff --git a/pjmedia/docs/PJMEDIA.txt b/pjmedia/docs/PJMEDIA.txt
deleted file mode 100644
index 8d6a9105..00000000
--- a/pjmedia/docs/PJMEDIA.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-The way PJMEDIA works at the moment is: for each destination party (e.g. the remote INVITE party), we have one media "session". For each "m" line in the SDP, PJMEDIA creates one media "stream". If the stream is bidirectional audio, then two media "channels" are created for that stream, one for each direction.
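-
-A minimal C sketch of that hierarchy is below. The struct and field names are hypothetical, chosen for illustration only; they are not the actual PJMEDIA declarations.
-
-    #include <stddef.h>
-
-    /* One channel per direction: the encoder sends RTP, the decoder
-     * receives RTP. */
-    typedef struct media_channel {
-        int is_encoder;             /* 1 = encoder, 0 = decoder */
-    } media_channel;
-
-    /* One stream per "m" line in the SDP. */
-    typedef struct media_stream {
-        media_channel *enc_channel; /* created when we send audio */
-        media_channel *dec_channel; /* created when we receive audio */
-    } media_stream;
-
-    /* One session per destination party (e.g. the remote INVITE party). */
-    typedef struct media_session {
-        media_stream **streams;     /* one entry per SDP "m" line */
-        size_t         stream_cnt;
-    } media_session;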
-
-The two channels in one stream share one instance of "codec". A codec is a simple struct that provides encode() and decode() functions.
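-
-As a sketch (again with hypothetical names, not the real PJMEDIA API), such a codec struct could look like this:
-
-    #include <stddef.h>
-
-    /* A codec is a struct of function pointers plus private state; both
-     * channels of a stream point to the same instance. */
-    typedef struct codec codec;
-    struct codec {
-        void *state;  /* codec-private data */
-        int (*encode)(codec *c, const short *pcm, size_t samples,
-                      unsigned char *out, size_t *out_len);
-        int (*decode)(codec *c, const unsigned char *in, size_t in_len,
-                      short *pcm, size_t *samples);
-    };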
-
-The media channels will end up in the appropriate "sound stream". The decoder channel (i.e. the RTP receiver) ends up in the sound player stream, and the encoder channel (i.e. the RTP sender) gets its audio frames from the sound recorder stream.
-
-Both the sound player and recorder devices (or streams) are active objects: they have their own threads. A media channel only needs to register a callback function to be called when audio frames are available from, or should be supplied to, the sound devices. This approach works very well with DirectSound and with PortAudio's sound framework.
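-
-In sketch form (hypothetical names; the PortAudio/DirectSound specifics are omitted), the callback model looks like this. The device threads drive the timing, so the channels need no threads of their own just to move audio:
-
-    #include <stddef.h>
-
-    /* Called from the sound player's thread when it needs PCM to play;
-     * the decoder channel registers this and fills the buffer. */
-    typedef int (*play_cb)(void *user_data, short *pcm_out, size_t samples);
-
-    /* Called from the sound recorder's thread when captured PCM is ready;
-     * the encoder channel registers this, encodes, and sends RTP. */
-    typedef int (*rec_cb)(void *user_data, const short *pcm_in, size_t samples);
-
-    /* Assumed registration entry points, provided by the sound layer. */
-    void snd_player_register(play_cb cb, void *user_data);
-    void snd_recorder_register(rec_cb cb, void *user_data);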
-
-But with the introduction of the jitter buffer, another thread needs to be created for the decoder channel. That thread reads RTP from the socket on a periodic basis and puts each frame (still encoded) into the jitter buffer. When the sound player callback is called (by the sound device), it looks for a frame in the jitter buffer (instead of reading the RTP socket), decodes it, and returns the resulting PCM frame to the sound player.
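-
-A sketch of the two sides of that hand-off (hypothetical names again; locking, sequencing, and RTP parsing are omitted):
-
-    #include <stddef.h>
-    #include <string.h>
-
-    /* Assumed helpers, declared here only so the sketch is self-contained. */
-    typedef struct jitter_buffer jitter_buffer;
-    typedef struct codec codec;          /* as sketched above */
-    extern void   jbuf_put(jitter_buffer *jb, const void *frm, size_t len);
-    extern size_t jbuf_get(jitter_buffer *jb, void *frm, size_t max_len);
-    extern size_t rtp_recv_payload(int sock, void *buf, size_t max_len);
-    extern int    codec_decode(codec *c, const void *in, size_t in_len,
-                               short *pcm, size_t *samples);
-
-    typedef struct decoder_channel {
-        int            rtp_sock;
-        jitter_buffer *jbuf;
-        codec         *cdc;
-    } decoder_channel;
-
-    /* Decoder-channel thread: moves RTP payloads, still encoded, from
-     * the socket into the jitter buffer as they arrive. */
-    void rtp_reader_thread(decoder_channel *ch)
-    {
-        unsigned char payload[1500];
-        for (;;) {
-            size_t len = rtp_recv_payload(ch->rtp_sock, payload,
-                                          sizeof(payload));
-            if (len > 0)
-                jbuf_put(ch->jbuf, payload, len);
-        }
-    }
-
-    /* Sound player callback: runs in the sound device's thread; pulls one
-     * encoded frame from the jitter buffer, decodes it, returns PCM. */
-    int on_play(void *user_data, short *pcm_out, size_t samples)
-    {
-        decoder_channel *ch = user_data;
-        unsigned char frame[1500];
-        size_t len = jbuf_get(ch->jbuf, frame, sizeof(frame));
-
-        if (len == 0)
-            memset(pcm_out, 0, samples * sizeof(short)); /* underrun: silence */
-        else
-            codec_decode(ch->cdc, frame, len, pcm_out, &samples);
-        return 0;
-    }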
-
-Now, getting back to the topic of why I think this could work as it is for your application.
\ No newline at end of file