Video stabilization is an essential part of video processing for footage captured under very shaky conditions. From the viewer's perspective, extracting information from such video can be distracting: it is difficult to concentrate on, and exhausting to track, the target of interest in the scene. In extreme cases, when frames with large inter-frame variation are averaged by the eye's perception, the details of the scene become impossible to identify. The motivation of this research is to develop a robust algorithm that stabilizes video, whether live or recorded, by compensating for vibrations originating in the camera's physical mounting. The steps taken toward this goal include enhancing each frame to improve visibility, measuring inter-frame variation, identifying and relocating optimal feature points, tracking those features, and defining run-time parameters that determine the transforms needed to stabilize the video sequence. Once the algorithm is developed, our extended step is to deploy the work as a hardware implementation for embedded systems, so that massive parallel processing capability can be exploited to achieve real-time throughput. Such an architecture is necessary for video processing applications that demand very high pixel volumes, possibly at higher frame rates than conventional cameras support. The research tools include Matlab, VC++, DirectX, OpenCV, and Xilinx's FPGA prototyping package for software and hardware co-development.