Category Archives: Research

Testing Media Server Latency

Remus Negrota,
Product Manager

AVChat 3, Flash Media Server, Red5, Research, Wowza

Latency, and especially high latency, is one of the main problems in real-time communication between a client and a server, so we decided to do some testing to find out which of the three main media servers (AMS, Wowza and Red5) can achieve the lowest latency.

The Testbed

The testing was done using our flagship product, AVChat 3, as both the client and the server-side application.

The client side of AVChat was installed on a local machine in Romania with the following specifications: an Intel i5 CPU @ 3.30 GHz and 8 GB of RAM, running Windows 7 x64.

For the media server I used a VPS located in New York and another in Amsterdam, both with the following specifications: 4 CPUs and 8 GB of RAM, running CentOS 6.5 x64 Linux.

This test was done using just one connected client.

The delay was probed in two ways:

  1. Using a ‘ping’-like call implemented from the client to the server, measuring the round-trip time (RTT) of the message;
  2. Turning on the live stream and measuring the delay between the broadcast and the viewing of the stream, by simply holding a stopwatch app in front of the camera and reading the difference shown between the two videos, like so:

[Screenshot: the broadcaster and viewer streams side by side, with the stopwatch visible in both and the RTT readout shown in a green box]

Notice that in the image above there is a time difference of 120 ms between the two videos, which corresponds roughly to the RTT of 99 ms shown in the green box. The 21 ms difference between the two values can be accounted for by the time it takes to encode the video on the broadcaster’s side and to decode it on the viewer’s side.
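
For reference, the ‘ping’-like measurement from the first method boils down to timestamping a message, having the server echo it back and reading the elapsed time. A minimal JavaScript sketch of the idea (server.echo is a hypothetical remote call that simply replies with whatever it receives; the actual AVChat implementation differs):

// Measure round-trip time (RTT) with a 'ping'-like echo call.
function measureRtt(server, onResult) {
    var sentAt = Date.now();                  // timestamp taken just before sending
    server.echo('ping', function onReply() {  // the server echoes the message back
        onResult(Date.now() - sentAt);        // elapsed time = round trip, e.g. ~99 ms
    });
}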

The Actual Results

I’ve made a comparison table for all three media servers tested, covering both the RTT and the delay between the broadcast stream and the viewing stream. Here are the results:

Media Server                 | RTT          | Stream Delay
AMS 5.0.3 default settings   | 146 - 229 ms | 390 - 520 ms
AMS 5.0.3 tweaked settings   | 140 - 160 ms | 390 - 780 ms
Red5 1.0 RC1                 | 140 - 192 ms | 240 - 390 ms
Wowza Streaming Engine 4.0.3 | 139 - 221 ms | 390 - 580 ms

The tweaked AMS settings mentioned in the table above are the ones Adobe recommends for obtaining lower latencies:

  • StreamManager/Live/Queue/MaxQueueSize in Application.xml. Setting MaxQueueSize to lower values reduces latency but is less efficient performance-wise.
  • StreamManager/Live/Queue/MaxQueueDelay in Application.xml. Decreasing the queue delay also reduces latency, again at the cost of efficiency (see the example fragment below).
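
For orientation, both elements live in the application's Application.xml under StreamManager/Live/Queue. The fragment below is only an illustrative sketch of where they sit; the values shown are examples, not recommendations:

<Application>
    <StreamManager>
        <Live>
            <Queue>
                <!-- smaller queue: messages are flushed to clients sooner, lowering latency -->
                <MaxQueueSize>1024</MaxQueueSize>
                <!-- shorter maximum queueing delay, in milliseconds -->
                <MaxQueueDelay>100</MaxQueueDelay>
            </Queue>
        </Live>
    </StreamManager>
</Application>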

Overall, the queueing defaults are designed to scale better with more clients connected at once, so they are not really relevant in this case, where only one client is connected.

The number of clients connected at any given time also plays a major role when it comes to latency. Some media-servers scale better in this regard but this is not the focus of our current experiment.

That being said, as you can see there are no major differences between the three media servers when it comes to either RTT or stream delay.

To further the experiment I ran the same tests against an identical VPS, this time located in Amsterdam, so the connection was Romania – Amsterdam instead of Romania – New York. Here are the results:

Media Server                 | RTT        | Stream Delay
AMS 5.0.3 default settings   | 58 - 82 ms | 90 - 130 ms
Wowza Streaming Engine 4.0.3 | 62 - 87 ms | 80 - 140 ms
Red5 1.0 RC1                 | 59 - 91 ms | 100 - 150 ms

After this final testing we can draw the following conclusion: the most important aspect when it comes to latency between client and server is the location of the server in relation to the location of the client.

Different technologies and tweaks may help decrease latency, but ultimately the distance between client and server is the determining factor.

Recording MP3 Using Only HTML5 and JavaScript (Recordmp3.js)

Remus Negrota,
Product Manager

Research, Updates

With the continuous advancement of HTML5, audio/video capture using only the browser has reached a turning point: recording is now possible, though only in specific browsers. In this article we will focus on audio capture, and more specifically on capturing audio from the microphone and encoding it to MP3.

The Name of the Game is getUserMedia()

Using the getUserMedia() API, you can capture raw audio input from your microphone.

We will get to the ‘how’ soon, but first you have to remember that this API is still in development, is not supported by all browsers, and has no standardized version yet. The best support can be found in Chrome, followed closely by Firefox. For a more detailed look at the history and development of the API you can check the html5rocks article, and for a thorough reference guide you can check the Mozilla Developer Network article.

The Recorder.js Library and libmp3lame.js Library

The starting point for the whole process of recording MP3 is Matt Diamond’s Recorder.js, a JavaScript library for recording/exporting the output of Web Audio API nodes. The library is released under the MIT license.

Recorder.js implements the audio capture functionality using getUserMedia and saves the result in WAV format. The problem with WAV is that the files are uncompressed and therefore take up a lot of disk space: just one minute of recording can take as much as 10 megabytes.
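
The 10 MB figure follows directly from the WAV math: at the common 44.1 kHz sample rate with 16-bit (2-byte) samples, stereo audio takes 44,100 × 2 bytes × 2 channels ≈ 176 KB per second, which is roughly 10.6 MB per minute (about half of that for mono).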

The solution? Convert the WAV to MP3. Simply saving the WAV file and converting it afterwards will not do; we need to convert the recording to MP3 in real time, in the browser.

So how do we do that? There is a JavaScript MP3 library that can do exactly this, ported directly from the most widely used MP3 encoder, the LAME MP3 Encoder. LAME is a high-quality MPEG Audio Layer III (MP3) encoder licensed under the LGPL.

The library’s name is libmp3lame.js, and a minified, ready-to-use version can be downloaded from GitHub. The library is licensed under the LAME license terms.

So it looks like we have all the tools we need. Let’s take a detailed look at what we need to do next.

Putting It All Together

1. Making Recorder.js work on Firefox

I started off by modifying the Recorder.js project to my needs. The library in its default state works only in Chrome, so I modified the window.onload function in index.html so that it can capture audio in Firefox as well:

window.onload = function init() {
  try {
    // webkit shim
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    navigator.getUserMedia = (navigator.getUserMedia ||
                              navigator.webkitGetUserMedia ||
                              navigator.mozGetUserMedia ||
                              navigator.msGetUserMedia);
    window.URL = window.URL || window.webkitURL;

    audio_context = new AudioContext();
    __log('Audio context set up.');
    __log('navigator.getUserMedia ' + (navigator.getUserMedia ? 'available.' : 'not present!'));
  } catch (e) {
    alert('No web audio support in this browser!');
  }

  navigator.getUserMedia({audio: true}, startUserMedia, function(e) {
    __log('No live audio input: ' + e);
  });
};

2. Changing Recorder.js to Record Mono Wav Files

The recorder.js file (I’m talking here about the main JavaScript file, not the name of the library) expects by default input from two data channels, because the initial implementation produced stereo WAV files. For our purpose we need to change that to mono recording, otherwise abnormal MP3 recordings with low-pitched sound will be produced. There is also no particular need for the recording to be stereo in the first place: a normal microphone records in mono by default.

The first change that has to be made for a mono recording is to the onaudioprocess event function:

this.node.onaudioprocess = function(e){
  if (!recording) return;
  worker.postMessage({
    command: 'record',
    buffer: [
      e.inputBuffer.getChannelData(0),
      //e.inputBuffer.getChannelData(1)
    ]
  });
}

I’ve just commented out the inputBuffer capture for the second channel.

Next, in the actual JavaScript worker (recorderWorker.js), I’ve made several modifications.

First, in the record function, we must only capture the inputBuffer for the first channel, because the second one won’t exist after our previous modification:

function record(inputBuffer){
  recBuffersL.push(inputBuffer[0]);
  //recBuffersR.push(inputBuffer[1]);
  recLength += inputBuffer[0].length;
}

The second change is to the exportWAV method. Continuing the trend, we only need to process one audio channel:

function exportWAV(type){
  var bufferL = mergeBuffers(recBuffersL, recLength);
  //var bufferR = mergeBuffers(recBuffersR, recLength);
  //var interleaved = interleave(bufferL, bufferR);
  //var dataview = encodeWAV(interleaved);
  var dataview = encodeWAV(bufferL);
  var audioBlob = new Blob([dataview], { type: type });

  this.postMessage(audioBlob);
}

And finally, changes were made to the function that does all the encoding needed to produce the WAV file (bit by bit). Here several lines have been replaced by their mono counterparts; exactly which ones is marked in the comments in the source code:

function encodeWAV(samples){
  var buffer = new ArrayBuffer(44 + samples.length * 2);
  var view = new DataView(buffer);

  /* RIFF identifier */
  writeString(view, 0, 'RIFF');
  /* file length */
  view.setUint32(4, 32 + samples.length * 2, true);
  /* RIFF type */
  writeString(view, 8, 'WAVE');
  /* format chunk identifier */
  writeString(view, 12, 'fmt ');
  /* format chunk length */
  view.setUint32(16, 16, true);
  /* sample format (raw) */
  view.setUint16(20, 1, true);
  /* channel count */
  //view.setUint16(22, 2, true); /*STEREO*/
  view.setUint16(22, 1, true); /*MONO*/
  /* sample rate */
  view.setUint32(24, sampleRate, true);
  /* byte rate (sample rate * block align) */
  //view.setUint32(28, sampleRate * 4, true); /*STEREO*/
  view.setUint32(28, sampleRate * 2, true); /*MONO*/
  /* block align (channel count * bytes per sample) */
  //view.setUint16(32, 4, true); /*STEREO*/
  view.setUint16(32, 2, true); /*MONO*/
  /* bits per sample */
  view.setUint16(34, 16, true);
  /* data chunk identifier */
  writeString(view, 36, 'data');
  /* data chunk length */
  view.setUint32(40, samples.length * 2, true);

  floatTo16BitPCM(view, 44, samples);

  return view;
}

With these changes the Recorder.js library will now produce mono wav files, exactly what we need.

3. Integrating libmp3lame.js With Recorder.js

For my project I used the compiled, minified version of libmp3lame.js available in the GitHub project.

The mono WAV files produced in step 2 are returned in Blob format. From here we can start the real-time conversion from WAV to MP3. The process starts in the worker.onmessage event handler; using a FileReader, I read the blob as an array buffer with fileReader.readAsArrayBuffer(blob):

worker.onmessage = function(e){
  var blob = e.data;
  //console.log("the blob " + blob + " " + blob.size + " " + blob.type);

  var arrayBuffer;
  var fileReader = new FileReader();

  fileReader.onload = function(){
    arrayBuffer = this.result;
    var buffer = new Uint8Array(arrayBuffer),
        data = parseWav(buffer);

    console.log(data);
    console.log("Converting to Mp3");
    log.innerHTML += "\n" + "Converting to Mp3";

    encoderWorker.postMessage({ cmd: 'init', config: {
      mode: 3,
      channels: 1,
      samplerate: data.sampleRate,
      bitrate: data.bitsPerSample
    }});

    encoderWorker.postMessage({ cmd: 'encode', buf: Uint8ArrayToFloat32Array(data.samples) });
    encoderWorker.postMessage({ cmd: 'finish' });

    encoderWorker.onmessage = function(e) {
      if (e.data.cmd == 'data') {

        console.log("Done converting to Mp3");
        log.innerHTML += "\n" + "Done converting to Mp3";

        /*var audio = new Audio();
        audio.src = 'data:audio/mp3;base64,' + encode64(e.data.buf);
        audio.play();*/

        //console.log("The Mp3 data " + e.data.buf);

        var mp3Blob = new Blob([new Uint8Array(e.data.buf)], {type: 'audio/mp3'});
        uploadAudio(mp3Blob);

        var url = 'data:audio/mp3;base64,' + encode64(e.data.buf);
        var li = document.createElement('li');
        var au = document.createElement('audio');
        var hf = document.createElement('a');

        au.controls = true;
        au.src = url;
        hf.href = url;
        hf.download = 'audio_recording_' + new Date().getTime() + '.mp3';
        hf.innerHTML = hf.download;
        li.appendChild(au);
        li.appendChild(hf);
        recordingslist.appendChild(li);
      }
    };
  };

  fileReader.readAsArrayBuffer(blob);

  currCallback(blob);
}

Next, the buffer is parsed by the parseWav function and the encoding process begins in encoderWorker. This worker is created from mp3Worker.js, the JavaScript file that imports the minified version of libmp3lame.js. This is where it all comes together: the final product of this worker is a Uint8Array of MP3 data.
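
For orientation, here is a stripped-down sketch of what such a worker can look like, following the init/encode/finish message protocol used above. The actual libmp3lame.js calls are only hinted at in comments, since they depend on the exact build of the library:

// mp3Worker.js - simplified sketch of the encoder worker (not the project's actual file)
importScripts('libmp3lame.min.js'); // pulls in the minified LAME port

var mp3Data = []; // collected chunks of encoded MP3 bytes

self.onmessage = function(e) {
  switch (e.data.cmd) {
    case 'init':
      // configure the LAME encoder (channels, samplerate, bitrate)
      // from e.data.config - the libmp3lame.js setup calls go here
      break;
    case 'encode':
      // feed e.data.buf (the Float32Array of PCM samples) to the encoder
      // and push the returned MP3 bytes onto mp3Data
      break;
    case 'finish':
      // flush the encoder, then hand the collected bytes back to the page
      self.postMessage({ cmd: 'data', buf: mp3Data });
      mp3Data = [];
      break;
  }
};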

Once the encoding is done, a new Blob object is created from the Uint8Array, ready to be downloaded and listened to through a standard HTML audio control. The MP3 is also automatically saved to disk with the help of AJAX and a PHP data-writing script.
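
The uploadAudio step itself can be as simple as POSTing the blob with an XMLHttpRequest. A minimal sketch, assuming a hypothetical upload.php endpoint that writes the received data to disk (the real project's endpoint and field names may differ):

function uploadAudio(mp3Blob) {
  var form = new FormData();
  form.append('audio_data', mp3Blob, 'recording.mp3'); // field and file names are illustrative

  var xhr = new XMLHttpRequest();
  xhr.open('POST', 'upload.php', true); // hypothetical PHP script that saves the file
  xhr.onload = function() {
    console.log('Upload finished with status ' + xhr.status);
  };
  xhr.send(form);
}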

4. Done

That’s it. We’ve achieved what we set out to do: we’ve created MP3 audio recordings directly in the browser using nothing more than JavaScript and HTML.

A live demo is available here ( Chrome and Firefox only ).

The whole code is available to download on GitHub under the Recordmp3.js project (includes the modified Recorder.js).

The modified Recorder.js version is also available separately, as a fork of the original Recorder.js project, here.

Known issues

The resulting MP3 recording comes out roughly twice as long as the original, so the actual audio makes up only about 50% of the file: a 5-second recording ends up with a duration of about 10 seconds, with the last 5 seconds being silent. This may be caused by a buffer problem.

What Is the State of WebRTC?

Remus Negrota,
Product Manager

Research

Real-Time Communication Without Plugins

WebRTC stands for Web Real-Time Communication. It is a peer-to-peer communication technology for the browser that enables video/audio calling and data sharing without additional plugins. WebRTC started as an effort by Google to build a standard real-time media engine into all the major browsers, and it is now supported by Google, Mozilla and Opera. The API and underlying protocols are being developed jointly at the W3C and IETF. Similar attempts at implementing peer-to-peer communication over the web have been made before by Adobe, through their acquisition of Amicima in 2006 and the subsequent Flash Player 10 (October 2008) and 10.1 (June 2010) releases, but somehow the peer-to-peer technology in Flash Player never took off.

The guiding principle of the WebRTC project is to be a free, standardized, open-source project that enables real-time communication across different browsers using simple JavaScript APIs.

Now you may say: “But we already have real-time communication technologies such as Flash Player (with AMS) and WebSockets, there is no need for WebRTC”.

All three are slightly different:

  • The WebSockets technology is all about providing a reliable real-time data connection via JavaScript.
  • Flash Player uses RTMFP (Real Time Media Flow Protocol, UDP-based), developed by Adobe, and needs Adobe Media Server Extended or the Adobe Cirrus service for signaling and NAT traversal.
  • WebRTC provides a browser infrastructure for real-time communication but provides no server-side tool for signaling and NAT traversal.

The UDP Protocol Enabled in the Browser

WebRTC primarily uses the UDP protocol, which is a lot faster than TCP because it doesn’t deal with packet ordering or error correction. UDP is used in cases where only the latest piece of data matters and there is no need to wait for earlier data; VoIP and multiplayer games are very good examples of applications that benefit from these characteristics. WebRTC makes UDP available in the browser without additional plugins.

What is the potential for WebRTC use?

WebRTC is primarily known as a peer-to-peer audio and video calling technology between browsers, similar to Skype, but it can do much more than that:

  • Collaborative activities
  • Multiplayer games in the browser
  • Peer-to-peer file sharing
  • Peer-to-peer CDN
  • Remote control of devices

So, as you can see, WebRTC has high potential across many areas of technology. But what drives all of these capabilities, what are the inner workings used to produce such web apps? The answer, my friends, is a handful of JavaScript APIs that we will discuss in the next part.

The Javascript APIs

Currently WebRTC has three APIs:

  • MediaStream (aka getUserMedia)
  • RTCPeerConnection
  • RTCDataChannel

getUserMedia, as the name suggests, gets the video and audio, if available, from an input (e.g. a web-cam) and outputs it in the browser via the HTML5 <video> tag. To see it in action take a look at this cross-browser demo.

RTCPeerConnection lets you make peer-to-peer connections and attach media streams such as video and audio. The Chrome implementation of the API is prefixed with webkit, while Firefox Aurora/Nightly names it mozRTCPeerConnection; the prefixes will be removed once the standardization process is complete. Here’s a link to a demo of Chrome’s RTCPeerConnection. RTCPeerConnection is what video chat apps need; here is an example of this API in action: a video-chat application.

RTCDataChannel lets you send arbitrary data across peer-to-peer connections.

The three APIs are supported as follows:

  1. PC/Mac
    • Google Chrome 23 (released on the 6th of  November 2012)
    • Mozilla Firefox 22 (released on the 25th of June 2013)
    • Opera 18 (released on the 18th of November 2013)
    • Internet Explorer has no native support for WebRTC but it can be added using Chrome Frame (development for Chrome Frame is no longer active).
    • Safari: not supported
  2. Android
    • Google Chrome version 28 (Needs configuration at chrome://flags/)
    • Mozilla Firefox version 24 (also behind a flag)
    • Opera Mobile version 12 (only supports getUserMedia, no real peer-to-peer support)
  3. Google Chrome OS
  4. WebRTC is also supported by the Ericsson Bowser browser which runs on Android and iOS.

For a detailed comparison between browsers you can access this link.

MediaStream, aka getUserMedia, in Detail

The MediaStream API is the part of WebRTC that describes a stream of audio or video data, the methods for working with it, the success and error callbacks used when consuming the data asynchronously, and the events fired during the process. Each MediaStream has an input, which can be a LocalMediaStream generated by navigator.getUserMedia(), and an output, which can be passed to an HTML5 video element or to an RTCPeerConnection.

The getUserMedia() method takes three parameters:

  • A constraint object
  • A success callback function that passes the LocalMediaStream
  • A failure callback that passes an error object

Here’s an example of a simple implementation:
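
(The snippet below is a minimal sketch following the three-parameter form just described; the constraints object and what the callbacks do with the stream are illustrative choices.)

navigator.getUserMedia(
  { audio: true, video: true },            // constraint object
  function (localMediaStream) {            // success callback receives the LocalMediaStream
    var video = document.querySelector('video');
    video.src = window.URL.createObjectURL(localMediaStream);
    video.play();
  },
  function (error) {                       // failure callback receives an error object
    console.log('getUserMedia error: ' + error);
  }
);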

RTCPeerConnection in Detail

RTCPeerConnection is the API WebRTC uses to communicate streaming data between browsers. It also needs a mechanism to coordinate communication and to send control messages, a process known as signaling. The signaling methods and protocols are not part of the RTCPeerConnection API, so developers can choose whichever messaging protocol they prefer (e.g. SIP or XMPP). Google provides the Channel API as one signaling mechanism, and WebRTC has also been proven to work using WebSockets for signaling.

Signaling is used to exchange three types of information:

  • Session control messages: to initialize or close communication and report errors.
  • Network configuration: to the outside world, what’s my computer’s IP address and port.
  • Media capabilities: what codecs and resolutions can be handled by my browser and the browser it wants to communicate with.

All of this information must be exchanged successfully before peer-to-peer streaming can be established.

Here’s a code sample from the WebRTC W3C Working Draft, which shows the signaling process in action. (The code assumes the existence of some signaling mechanism, created in the createSignalingChannel() method. Also note that on Chrome, RTCPeerConnection is currently prefixed.)
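
In the same spirit, a heavily simplified sketch of the caller’s side might look like the snippet below. The signalingChannel object, the STUN server choice and the callback bodies are assumptions for illustration, and the prefixed constructor is used as noted above:

var signalingChannel = createSignalingChannel(); // assumed signaling transport
var configuration = { iceServers: [{ url: 'stun:stun.l.google.com:19302' }] };
var pc = new webkitRTCPeerConnection(configuration); // prefixed on Chrome

// send any ICE candidates we gather to the other peer via the signaling channel
pc.onicecandidate = function (evt) {
  if (evt.candidate) {
    signalingChannel.send(JSON.stringify({ candidate: evt.candidate }));
  }
};

// create an offer, set it as the local description and send it to the other peer
pc.createOffer(function (offer) {
  pc.setLocalDescription(offer, function () {
    signalingChannel.send(JSON.stringify({ sdp: pc.localDescription }));
  });
}, function (err) {
  console.log('createOffer failed: ' + err);
});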

The draft code shows a simplified view of WebRTC from a signaling perspective. In the real world, WebRTC needs servers, however simple, in order to achieve the following:

  • Users discover each other and exchange real world information.
  • WebRTC client apps (peers) exchange network information.
  • Peers exchange media capabilities such as video format and resolution.
  • WebRTC client apps traverse NAT gateways and firewalls.

The requirements for building a server, NAT traversal and peer-to-peer networking exceed the scope of this article; however, it is important to remember the following: WebRTC uses the ICE protocol, which in turn uses STUN and its extension TURN, to enable peer-to-peer communication (this is needed so that peers behind a NAT can find out their public IP address and port). Google already provides several STUN servers.

WebRTC, as currently implemented, only supports one-to-one communication, but it can be used in more complex network scenarios: for example, with multiple peers each communicating with each other directly, peer-to-peer, or via a centralized server.

So, as we can see, WebRTC also needs a middleman (some kind of server) to handle peer-to-peer connections. For comparison, Adobe provides the Adobe Cirrus beta service and Adobe Media Server Extended to handle signaling for peer-to-peer apps developed for Flash Player using RTMFP.

One good example of an application that uses peer-to-peer technology and still needs servers for user discovery and communication setup is Skype.

RTCDataChannel in Detail

The RTCDataChannel is a WebRTC API for high performance, low latency, peer-to-peer communication of arbitrary data. The API is simple—similar to WebSocket—but communication occurs directly between browsers, so RTCDataChannel can be much faster than WebSocket even if a relay (TURN) server is required when ‘hole punching’ to cope with firewalls and NATs fails.
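
A bare-bones sketch, assuming an already configured RTCPeerConnection named pc such as the one set up earlier (the channel label and options are illustrative):

// create a data channel on an existing peer connection
var channel = pc.createDataChannel('chat', { reliable: false }); // unreliable, low-latency delivery

channel.onopen = function () {
  channel.send('hello from the data channel');
};

channel.onmessage = function (event) {
  console.log('received: ' + event.data);
};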

Potential applications for the RTCDataChannel API are:

  • Gaming
  • File transfer
  • Real-time text chat
  • Remote desktop applications
  • Decentralized networks

The API provides several features that make the most out of RTCPeerConnection:

  • Reliable and unreliable delivery semantics.
  • Built-in security (DTLS) and congestion control.
  • Multiple simultaneous channels with prioritization.
  • Ability to use with or without audio and video.
  • Leveraging of RTCPeerConnection session setup.

Video and Audio Codecs

As stated on the official WebRTC FAQ the currently supported codecs are:

Audio:

  • G.711
  • G.722
  • iLBC
  • iSAC

Video:

  • VP8

The codecs included in the WebRTC project are subject to change.

The huge dilemma right now is how cross-browser communication will actually work. Browsers differ in the audio and video codecs they support, so if, for example, Chrome encodes video with VP8 and sends it to Firefox, and Firefox does not know how to decode VP8, communication is simply not possible.

At this time there is a war going on between Google and Ericsson over which codecs should be used as the standard for WebRTC.

Google’s side: VP8 for video and Opus for voice along with G.711. All are royalty free and provide high quality.

Ericsson’s side: H.264 for video (with the prospect of H.265) and G.719 or AMR-NB for voice, maybe even AMR-WB and EVS, all of them codecs backed by ITU standards.

Microsoft is also pushing a proposal of their own for WebRTC called CU-RTC-Web, but for now this only remains a proposal.

Google comes from the Internet world, where royalty-free is an asset, making VP8 a better option than H.264. The selection of Opus, a royalty-free audio codec, comes from the fact that it was developed and standardized by the IETF (where the Internet lives) and is a derivative work of Skype’s SILK codec. It is considered a good codec, but for now it is not included in the WebRTC project.

Ericsson is coming from mobile and from the ITU standardization work. All the codecs suggested by Ericsson come either from the ITU or from mobile (3GPP), so it makes sense for them to support this angle. Ericsson also holds patents on H.264, making it a beneficiary of royalty payments from the use of this codec.

Microsoft? They’re on Ericsson’s side. They are looking for more options, power and flexibility, but by doing that they are complicating the solution and probably running it into the ground.

If the IETF settles on multiple mandatory codecs for voice and video, it is going to be bad for the industry: the simple solution should win, as it makes it easier for companies to adopt the technology and for disruption to appear. If we end up with four or more voice codecs and two or more video codecs, then we are in for the same hurdle we have today with other VoIP standards: not having a standardized set of codecs.

For now it seems Google has the upper hand. Let’s hope it stays this way, because if the multiple-codecs approach is adopted, transcoding will be needed for communication between browsers, and that is not a good thing at all: transcoding adds latency, reduces quality, is expensive, and a trusted third party will always have to be involved.

Security

There are several ways a real-time communication application or plugin might compromise security. For example:

  • Unencrypted media or data might be intercepted en route between browsers, or between a browser and a server.
  • An application might record and distribute video or audio without the user knowing.
  • Malware or viruses might be installed alongside an apparently innocuous plugin or application.

WebRTC provides the following solutions to avoid these problems:

  • WebRTC implementations use secure protocols such as DTLS and SRTP.
  • Encryption is mandatory for all WebRTC components, including signaling mechanisms.
  • WebRTC is not a plugin: its components run in the browser sandbox and not in a separate process, components do not require separate installation, and are updated whenever the browser is updated.
  • Camera and microphone access must be granted explicitly and, when the camera or microphone are running, this is clearly shown by the user interface.

WebRTC vs P2P implementation in Flash Player

Adobe Cirrus (formerly known as Adobe Stratus) enables peer-assisted networking using the Real Time Media Flow Protocol (RTMFP) within the Adobe Flash Platform. In order to use RTMFP, Flash Player endpoints must connect to an RTMFP-capable server, such as the Cirrus service. Cirrus is a hosted beta rendezvous service that helps establish communication between Flash Player endpoints, and it is free to use. The second option is purchasing Adobe Media Server Extended, the only edition that supports RTMFP.

Let’s see a comparison between Adobe’s approach and WebRTC.

 
  • Availability date: WebRTC – May 2011; Flash P2P – October 2008
  • Maturity date: WebRTC – not yet; Flash P2P – June 2010
  • Servers needed: WebRTC – a service/signaling server plus STUN/TURN servers; Flash P2P – an RTMFP-enabled server (Adobe Media Server Extended, which costs around $45,000, or the free Adobe Cirrus service)
  • Encryption technology: WebRTC – DTLS and SRTP; Flash P2P – 128-bit AES
  • NAT traversal: WebRTC – the ICE and STUN/TURN protocols; Flash P2P – the TURN protocol
  • Famous apps using the technology: none on either side

Conclusions

The standards and APIs of WebRTC are still in the works, and the technology behind it is not yet fully developed, a fact reflected in it not yet being supported across all platforms and browsers. Adobe also spent two to four years developing its peer-to-peer communication technology, but it never really caught on, and that is because technology should walk in the steps of desire and need, and not the other way around.

Block an IP on a Linux Server

Alin Oita,
Director of Tech Support

Research, Tips & Tricks

You have probably found out that your server is being attacked: brute-force SSH attacks, port scanning, viruses scanning for a way to spread, things like that. Or maybe, for some other reason, you simply want to block a specific IP on your server.

In this article, I’ll show you how to block an IP address on a Linux server using iptables.

First, I’ll assume you are already using iptables. You can check with this command:
iptables -V

Second, you have to create two shell script files in the /etc/init.d folder.
So go to the folder with cd /etc/init.d and create the first file:
vi blockip.sh

Press “i” to enter insert mode and paste this script inside:

#!/bin/bash

# block the given IP ($1) by adding a DROP rule to the INPUT chain
/sbin/iptables -A INPUT -s $1 -j DROP

# save the rules so they persist across iptables restarts
/sbin/iptables-save > /etc/sysconfig/iptables

Save the file by pressing the Escape key and then typing :wq!

Then, create the second file:
vi allowip.sh

Press “i” to enter insert mode and paste this script inside:

#!/bin/bash

# unblock the given IP ($1) by deleting its DROP rule from the INPUT chain
/sbin/iptables -D INPUT -s $1 -j DROP

# save the rules so they persist across iptables restarts
/sbin/iptables-save > /etc/sysconfig/iptables
Save the file like you did before, then make both scripts executable: chmod +x /etc/init.d/blockip.sh /etc/init.d/allowip.sh

Now you can run sudo /etc/init.d/blockip.sh 1.2.3.4 (where 1.2.3.4 is the IP you want to block) from anywhere on the server.

To check if the IP was added, use this: /sbin/iptables -L INPUT -v -n | grep 1.2.3.4

To remove the IP from the blocked list, use sudo /etc/init.d/allowip.sh 1.2.3.4

 

Hope this helped.

Camjacking and Chrome’s New Extra Confirmation Dialog

Remus Negrota,
Product Manager

Research, Tips & Tricks, Updates

On June 18, 2013, Google released Chrome version 27.0.1453.116 for the Windows, Macintosh and Chrome Frame platforms, addressing a huge vulnerability involving Flash Player.

The issue is a specific type of clickjacking, now known as camjacking, which basically tricks users into pressing the “Allow” button in the Flash Player Settings window.

[Screenshot: the Flash Player Settings window with its “Allow” button]

This issue has been fixed by Adobe since October 2011, but somehow it could still be leveraged in Chrome to hijack web-cams and microphones.

A proof of concept (not safe for work, Chrome only) was developed by security researcher Egor Homakov (@homakov) to demonstrate the exploit. The issue was first reported by @typicalrabbit in a blog post on http://habrahabr.ru, a translated version of which can be found here.

The proof of concept shows a slideshow of pictures of girls with a play button right in the middle. If the play button is pressed, the user is actually allowing access to his or her web-cam.

[Screenshot: the proof-of-concept page, with the play button overlaying the hidden “Allow” button]

This is done by placing the Flash Player Settings window in an invisible layer, with the “Allow” button sitting right behind the visible play button. And just like that, cyber-criminals get access to your cam and microphone without you even knowing.

Chrome introduced an additional prompt for access to the web-cam and microphone. This notification is built into the browser itself, so even if the Flash Player Settings window is hidden as in the example explained above, the Chrome notification is still triggered. Below you can see the prompt asking for permission; until it is also approved, no website will be allowed to access a person’s web-cam.

[Screenshot: Chrome’s built-in permission prompt asking to allow camera and microphone access]

Once allowed, a settings menu can be accessed by clicking the camera icon that appears in the top right corner of the browser, next to the Chrome menu button. When clicked, the following window is shown, allowing you to manage your camera and microphone:

[Screenshot: Chrome’s camera and microphone settings window]

These privacy options can also be accessed via the Chrome settings like so:

  1. Click the Chrome menu button on the browser toolbar.
  2. Select Settings.
  3. Click Show advanced settings.
  4. In the “Privacy” section, click Content settings.
    • In the “Media” section:
      • Ask me when a site requires access to my camera and microphone: Select this option if you want Chrome to alert you whenever a site requests access to your camera and microphone.
      • Do not allow sites to access my camera and microphone: Select this option to automatically deny any site requests to access your camera and microphone.
    • Click Manage exceptions to remove previously-granted permissions for specific sites.

Conclusions:

The vulnerability is now properly fixed, and your web-cam is back under your full control.