Almost everyone comes across data in Vector or Raster format at some point while handling various kinds of data. Both are digital formats, so the actual difference between them can be hard for end users to grasp.
I thought I would pen down the technical difference between these two formats for non-technical users.
In its simplest form, the Raster format can be understood as one where data is represented as a grid, with each cell as the unit of measurement. Cells can be of any size, ranging from sub-centimeters to meters or kilometers.
A simple example is scanning any sheet (a document, map, picture, etc.): the scanner captures the sheet as a pre-defined grid, and the density of that grid is expressed in another unit, DPI (Dots Per Inch).
Another example is satellite imagery, where each cell of the grid is called a “pixel”. So for mathematical and modelling purposes, each cell in a Raster is independent and carries its own value, with the cell size defining the resolution.
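To make this concrete, here is a minimal sketch in plain Python of how a raster stores data as a grid of independent cells. The grid values and the 30-meter cell size are made-up illustrative numbers, not taken from any real dataset:

```python
# A tiny raster: a regular grid of cells, where each cell holds one
# measured value (e.g. a brightness reading from a satellite sensor).
# Each cell is independent of its neighbours.

cell_size_m = 30.0  # assumed ground size of one cell, for illustration

raster = [
    [12, 14, 14, 90],
    [13, 15, 88, 92],
    [13, 85, 91, 93],
    [80, 86, 90, 94],
]

# The value at row 2, column 1 (zero-indexed) covers one 30 m x 30 m
# patch of ground; to know anything about that patch, you read its cell.
print(raster[2][1], "covers", cell_size_m, "x", cell_size_m, "meters")
```

Notice that the raster knows nothing about shapes or objects; it only knows that each cell, at its fixed size, holds a value.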
Vector, by contrast, can be understood as a format in which each object is represented on an X and Y scale by a series of points, as shown in the example below:
The examples below represent a traffic island in both Vector and Raster form.
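For readers who prefer code to pictures, here is a hedged sketch, again in plain Python, of how that same traffic island might be stored as a vector object: simply a list of X/Y coordinate points outlining its boundary. The coordinates are invented for illustration:

```python
# The traffic island as a Vector object: an ordered list of (X, Y)
# points outlining its boundary, rather than a grid of cells.

traffic_island = [
    (2.0, 1.0),
    (5.0, 1.5),
    (6.0, 4.0),
    (3.5, 5.0),
    (1.5, 3.0),
    (2.0, 1.0),  # repeat the first point to close the polygon
]

# Because the shape is stored as coordinates, it can be scaled to any
# size without losing detail, unlike a raster, which is tied to its grid.
scaled = [(x * 10, y * 10) for (x, y) in traffic_island]
print(scaled)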
In my next post, I will focus on when one should use the Raster format and when the Vector format, as each has its own distinct advantages.