Binning lets you expand the sensitivity of a camera by combining pixels into a super-pixel. Binning is the process of combining charge from adjacent pixels in a CCD during readout. To understand what this means, imagine a CCD image as a large grid of pixels. Like any grid, the pixels are tiny squares arranged in columns and rows based on the maximum resolution of the CCD. (e.g. T11's array is 4008 pixels by 2672 pixels, so 4008 columns and 2672 rows.)
During imaging, photons hit the chip and "fill up" the pixels in the grid based on how many photons strike each location on the chip. Once the exposure is complete and the shutter closes, the chip begins to read out the data it just received. This is where binning takes place.
In Binning 1x1, the system reads out all the pixels from the last exposure in 1x1 blocks, meaning it moves 1 pixel at a time. In the T11 example, that is 10,709,376 pixels (4008 x 2672), or 10.7 megapixels.
Continuing the example, let's assume that each of those pixels contains ~16e- of signal, and that when a pixel is read out, the CCD produces ~16e- of noise. This means that every time a pixel is read out, you end up with ~16e- signal / ~16e- noise, giving you a 1:1 signal-to-noise ratio (SNR).
If you were to change this to Binning 2x2, the system instead combines pixels in blocks of 2 pixels x 2 pixels, and then moves them as one large block of 4 pixels. Using the example above, the block of 4 pixels with a combined signal of ~64e- reads out with the same ~16e- noise, because the 4 pixels are combined into one during the binning process. This gives you a 4:1 SNR but lowers your spatial resolution.
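The arithmetic above can be sketched in a few lines of Python. This is only an illustration of the numbers in the example, using NumPy: the ~16e- signal per pixel and ~16e- read noise per readout are the hypothetical figures from the text, not real camera specifications, and the tiny 4x4 array is a stand-in for a full CCD frame.

```python
import numpy as np

# Hypothetical values from the example above: each pixel holds ~16 e- of
# signal, and each readout event adds ~16 e- of noise.
SIGNAL_PER_PIXEL = 16.0   # electrons
READ_NOISE = 16.0         # electrons per readout

# A small stand-in for a CCD frame (4 rows x 4 columns, all ~16 e-).
frame = np.full((4, 4), SIGNAL_PER_PIXEL)

def bin_2x2(image):
    """Sum charge in non-overlapping 2x2 blocks.

    On-chip binning combines the charge of the 4 pixels BEFORE readout,
    which is why the read noise is only paid once per super-pixel.
    """
    h, w = image.shape
    return image.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

binned = bin_2x2(frame)

# Binning 1x1: each pixel is read separately -> SNR = 16 / 16 = 1:1
snr_1x1 = SIGNAL_PER_PIXEL / READ_NOISE          # 1.0

# Binning 2x2: 4 pixels' charge, one readout -> SNR = 64 / 16 = 4:1
snr_2x2 = binned[0, 0] / READ_NOISE              # 4.0

print(snr_1x1, snr_2x2)
```

Note that the frame also shrinks from 4x4 to 2x2 super-pixels, which is the loss of spatial resolution the text describes.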
The main reasons to use higher binning (2x2, 3x3, etc.) are to improve the signal-to-noise ratio, increase the frame rate, and decrease readout time. This comes at the cost of spatial resolution, however, as you are effectively compressing the data. (Think of this like going from a TIFF file to a JPEG file, although it is not nearly as severe.)
In LRGB imaging, the standard is to use Binning 1x1 for the Luminance filter, since Luminance primarily supplies resolution and detail during the stacking process. For R, G, and B, most members will use Binning 2x2 to produce a better overall signal-to-noise ratio, and because the loss of resolution in the color images will not matter as much, since the Luminance will be there to supply it.
There are obviously exceptions based on conditions, the number of images a member is shooting, etc. For example, on a night with higher humidity, a member may shoot many exposures in Bin 2x2 to help improve the SNR of any images taken while the seeing is poor. Another example: if a member is only taking 2 Luminance exposures and 1 each of R, G, and B, they may shoot all in 1x1 so the RGB frames can contribute resolution as well, considering how little data they are combining.
This can be an odd concept to wrap one's mind around early on, as it can be difficult to think of the images you see from our telescopes as data. However, the best way to look at an image is as an Excel grid with a specific number assigned to each box based on the number of photons that hit each portion of the chip. The image you see is just a visual representation of the data, like a very complicated graph.
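The "grid of numbers" idea can be made concrete with a toy example. The counts below are made up for illustration: a small 3x3 array where each entry is the number of photons that landed in that pixel, exactly like cells in a spreadsheet.

```python
import numpy as np

# A toy 3x3 "image": each value is a photon count for one pixel.
# These numbers are invented for illustration only.
counts = np.array([
    [12,  15, 11],
    [14, 250, 16],   # the bright 250 could be a star
    [13,  15, 12],
])

# The picture you see is just a rendering of this grid: bright where the
# count is high, dark where it is low. Finding the brightest pixel is
# just finding the largest number in the grid.
brightest = np.unravel_index(counts.argmax(), counts.shape)
print(brightest)   # row 1, column 1 holds the "star"
```

Stacking, stretching, and binning are all just arithmetic performed on grids like this one.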