Frame Rate

Image: Frame Rate by Francesco Frankavilla

Frame rate is the frequency (rate) at which an imaging device produces unique consecutive images, called frames. The term applies equally to computer graphics, video cameras, film cameras, and motion capture systems. Frame rate is most often expressed in frames per second (FPS), and for progressive-scan monitors it is also expressed in hertz (Hz).

The human visual system can process 10 to 12 separate images per second, perceiving them individually. The visual cortex holds onto one image for about one-fifteenth of a second, so if another image is received during that period an illusion of continuity is created, allowing a sequence of still images to give the impression of motion.

Early silent films had frame rates from 14 to 24 FPS, but by using projectors with dual- and triple-blade shutters the rate seen by the audience was multiplied two or three times. Thomas Edison said that 46 frames per second was the minimum: 'anything less will strain the eye.' In the mid- to late 1920s, the frame rate for silent films increased to about 20 to 26 FPS. When sound film was introduced in 1926, variations in film speed were no longer tolerated, as the human ear is more sensitive to changes in audio frequency than the eye is to variations in frame rate. From 1927 to 1930, 24 FPS became the standard for 35 mm sound film, corresponding to a film speed of 456 millimeters (18.0 in) per second. This allowed simple two-blade shutters to project a series of images at 48 per second. Many modern 35 mm film projectors use three-blade shutters to give 72 images per second, each frame being flashed on screen three times.
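As a rough check on the arithmetic above (a Python sketch, assuming the standard 4-perforation 35 mm frame height of 19 mm), the projection speed and shutter flash rates work out as follows:

    # Sketch of the 35 mm projection arithmetic described above.
    # Assumes the standard 4-perforation frame height of 19.0 mm.
    FRAME_HEIGHT_MM = 19.0          # 4 perforations x 4.75 mm each
    FRAMES_PER_SECOND = 24

    film_speed = FRAME_HEIGHT_MM * FRAMES_PER_SECOND
    print(f"film speed: {film_speed:.0f} mm/s")          # 456 mm/s, about 18 in/s

    # Each frame is flashed once per shutter blade while it sits in the gate.
    for blades in (2, 3):
        print(f"{blades}-blade shutter: {FRAMES_PER_SECOND * blades} flashes/s")  # 48, 72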

There are three main frame rate standards in the TV and movie-making business: 24p, 25p, and 30p, though there are many variations on these as well as newer emerging standards. 24p is a progressive format and is now widely adopted by those planning to transfer a video signal to film. Film and video makers also use 24p when their productions are not going to be transferred to film, simply because the on-screen 'look' of the (low) frame rate matches that of native film. When transferred to NTSC television, the rate is effectively slowed to 23.976 FPS, and when transferred to PAL or SECAM it is sped up to 25 FPS. 35 mm movie cameras use a standard exposure rate of 24 FPS, though many cameras offer rates of 23.976 FPS for NTSC television and 25 FPS for PAL/SECAM. The 24 FPS rate became the de facto standard for sound motion pictures in the mid-1920s, and practically all hand-drawn animation is designed to be played at 24 FPS. Actually drawing 24 unique frames per second (shooting 'on 1's') is costly, however: even big-budget films usually shoot hand-drawn animation 'on 2's' (each drawing is shown twice, so only 12 unique frames per second), and a lot of animation is drawn 'on 4's' (each drawing is shown four times, so only six unique frames per second).
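As a small illustration of how these rates relate (a Python sketch, assuming the conventional 1000/1001 NTSC slowdown factor):

    # How 24p relates to NTSC and PAL delivery, and to animation on '2's' and '4's'.
    FILM_FPS = 24

    # NTSC transfer: the film runs a factor of 1000/1001 slower.
    ntsc_fps = FILM_FPS * 1000 / 1001
    print(f"NTSC: {ntsc_fps:.3f} FPS")                    # 23.976

    # PAL/SECAM transfer: the film is simply run faster, at 25 FPS.
    speedup = (25 / FILM_FPS - 1) * 100
    print(f"PAL/SECAM: 25 FPS ({speedup:.1f}% faster)")   # about 4.2% faster

    # Shooting on '1's', '2's', or '4's': each drawing is held for
    # one, two, or four projected frames.
    for held in (1, 2, 4):
        print(f"on {held}'s: {FILM_FPS // held} unique drawings per second")  # 24, 12, 6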

25p derives from the PAL television standard of 50i (50 interlaced fields per second). Like 24p, 25p is often used to achieve a 'cine' look, albeit with virtually the same motion artifacts. It is also better suited to progressive-scan output (e.g., on LCD displays, computer monitors, and projectors) because no interlacing is involved. 30p mimics a film camera's frame-by-frame image capture; the effects of inter-frame judder are less noticeable than with 24p, yet it retains a cinema-like appearance. Shooting video in 30p mode gives no interlace artifacts but can introduce judder on image movement and on some camera pans. The widescreen film process Todd-AO used this frame rate in the 1950s.

60i (60 interlaced fields per second, yielding about 30 full frames) is the standard for NTSC television (e.g. in the US), whether from a broadcast signal, DVD, or home camcorder. It was developed as part of the NTSC television standards mandated by the FCC in 1941. When NTSC color was introduced in 1953, the older rate of 60 fields per second was reduced slightly, to approximately 59.94 fields per second, to avoid interference between the chroma subcarrier and the broadcast sound carrier. 50p/60p is a progressive format used in high-end HDTV systems. 48p is a progressive format currently being trialed in the film industry; at twice the traditional rate of 24p, it attempts to reduce the motion blur and flicker found in films. Director James Cameron stated his intention to film the two sequels to 'Avatar' at a higher frame rate than 24 frames per second, in order to add a heightened sense of reality. The first feature film to be shot at 48 FPS was 'The Hobbit,' a decision made by its director, Peter Jackson. However, at a preview screening at CinemaCon, the audience's reaction was mixed after being shown some of the film's footage at 48p, with some arguing that the footage felt too lifelike (thus breaking the suspension of disbelief).

72p is a progressive format currently in experimental stages. Modern cameras such as the Red One can use this frame rate to produce slow-motion replays at 24 FPS. Douglas Trumbull, whose experiments with different frame rates led to the Showscan film format, found that emotional impact on viewers peaked at 72 FPS. Higher frame rates, including 300 FPS, have been tested by BBC Research over concerns that fast motion on large HD displays could have a disorienting effect on viewers during sports and other broadcasts. Since 300 is an integer multiple of both 50 and 60, 300 FPS can be converted to both transmission formats without major issues.

Frame rate is also a term used in real-time computing, where a real-time frame is the time it takes to complete a full round of the system's processing tasks. If the frame rate of a real-time system is 60 hertz, the system re-evaluates all necessary inputs and updates all necessary outputs 60 times per second under all circumstances. Owing to their flexibility, software-based video formats can specify arbitrarily high frame rates, and many (cathode ray tube) consumer PC monitors operate at hundreds of frames per second, depending on the selected video mode; LCD screens usually run at 24, 25, 50, 60, or 120 FPS. The designed frame rates of real-time systems vary depending on the equipment: for a real-time system steering an oil tanker, a frame rate of 1 FPS may be sufficient, while a rate of even 100 FPS may not be adequate for steering a guided missile. The designer must choose a frame rate appropriate to the application's requirements.
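As a minimal sketch of that idea (a hypothetical 60 Hz control loop in Python, not any particular real-time system), the loop simply re-evaluates its inputs and outputs once per fixed frame period:

    import time

    FRAME_RATE_HZ = 60
    FRAME_PERIOD = 1.0 / FRAME_RATE_HZ      # roughly 16.7 ms per frame

    def read_inputs():
        pass    # placeholder: sample sensors, poll devices, read commands

    def update_outputs():
        pass    # placeholder: write actuator commands, refresh displays

    def run(frames=600):
        """Re-evaluate all inputs and outputs once per frame, 60 times per second."""
        next_deadline = time.perf_counter()
        for _ in range(frames):
            read_inputs()
            update_outputs()
            # Sleep until the next frame boundary so the rate stays fixed.
            next_deadline += FRAME_PERIOD
            delay = next_deadline - time.perf_counter()
            if delay > 0:
                time.sleep(delay)

    run()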

Frame rate in video games refers to the speed at which the image is refreshed. Many underlying processes, such as collision detection and network processing, run at different or inconsistent frequencies, or in different physical components of a computer. FPS affects the experience in two ways: a low FPS does not give the illusion of motion effectively and impairs the user's ability to interact with the game, while an FPS that varies substantially from one second to the next, depending on computational load, produces uneven, 'choppy' movement or animation. Many games therefore lock their frame rate at a lower but more sustainable level to give consistently smooth motion.
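One common way engines keep those frequencies separate is a fixed-timestep update driven by an accumulator, with the render rate capped for consistency; the sketch below is a generic Python illustration under those assumptions, not the loop of any particular game:

    import time

    PHYSICS_HZ = 50                      # fixed simulation / collision rate
    PHYSICS_STEP = 1.0 / PHYSICS_HZ
    RENDER_CAP_HZ = 30                   # frame rate deliberately locked lower
    RENDER_PERIOD = 1.0 / RENDER_CAP_HZ

    def step_physics(dt):
        pass    # placeholder: advance the simulation by a fixed dt

    def render():
        pass    # placeholder: draw the current state

    def game_loop(duration=5.0):
        last = time.perf_counter()
        accumulator = 0.0
        end = last + duration
        while time.perf_counter() < end:
            frame_start = time.perf_counter()
            accumulator += frame_start - last
            last = frame_start

            # Run the simulation in fixed steps, independent of the render rate.
            while accumulator >= PHYSICS_STEP:
                step_physics(PHYSICS_STEP)
                accumulator -= PHYSICS_STEP

            render()

            # Cap the render rate so motion stays consistently smooth.
            sleep_for = RENDER_PERIOD - (time.perf_counter() - frame_start)
            if sleep_for > 0:
                time.sleep(sleep_for)

    game_loop()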

The first 3D first-person game for a personal computer, 3D Monster Maze, had a frame rate of approximately 6 FPS and was still a success. In modern action-oriented games, where players must visually track animated objects and react quickly, frame rates of between 30 and 60 FPS are considered acceptable by most, though this can vary significantly from game to game. Some modern action games, including popular console shooters such as 'Halo 3,' are locked at a maximum of 30 FPS, while others, such as 'Unreal Tournament 3,' can run well in excess of 100 FPS on sufficient hardware. Additionally, some games, such as 'Quake 3 Arena,' perform physics, AI, networking, and other calculations in sync with the rendered frame rate; this can result in inconsistencies with movement and network prediction code if players are unable to maintain the designed maximum frame rate of 125 FPS. The frame rate within a game varies considerably depending on what is happening at a given moment and on the hardware configuration (especially in PC games). When the computation of a frame consumes more time than is allowed between frames, the frame rate decreases.
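The relationship between per-frame work and the achievable frame rate is simple arithmetic (the numbers below are hypothetical, purely for illustration):

    # Each target frame rate implies a time budget per frame.
    for target_fps in (125, 60, 30):
        print(f"{target_fps} FPS target -> {1000 / target_fps:.1f} ms per frame")

    # If the work per frame exceeds that budget, the frame rate falls:
    # 40 ms of computation per frame can sustain at most 25 FPS.
    work_ms = 40
    print(f"{work_ms} ms of work per frame -> at most {1000 / work_ms:.0f} FPS")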

A culture of competition has arisen among game enthusiasts with regard to frame rates, with players striving to obtain the highest FPS possible because of its utility in demonstrating a system's power and efficiency. Indeed, many benchmarks (such as 3DMark) released by the marketing departments of hardware manufacturers and published in hardware reviews focus on the FPS measurement. Even though typical LCD monitors today are locked at 60 Hz, making extremely high frame rates impossible to see in real time, playthroughs of game 'timedemos' at hundreds or thousands of FPS for benchmarking purposes are still common. Beyond measurement and bragging rights, such exercises do have practical bearing in some cases: a certain amount of discarded 'headroom' frames is beneficial for eliminating uneven output and for preventing FPS from plummeting during the intense sequences when players need smooth feedback most. A high frame rate still does not guarantee fluid movement, especially on hardware with more than one GPU; this effect is known as micro stuttering. Aside from frame rate, a separate but related factor unique to interactive applications such as gaming is latency: excessive preprocessing can result in a noticeable delay between player commands and computer feedback, often referred to as input lag, even when a full frame rate is maintained.
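A benchmark FPS figure is just frames rendered divided by elapsed time; the sketch below (Python, with a simulated workload standing in for real rendering) also records per-frame times, since an average can hide the occasional spike that reads as stutter:

    import random
    import time

    def render_frame():
        # Simulated workload: mostly fast frames with an occasional spike.
        time.sleep(0.016 if random.random() > 0.05 else 0.050)

    frame_times = []
    start = time.perf_counter()
    for _ in range(120):
        t0 = time.perf_counter()
        render_frame()
        frame_times.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    print(f"average: {len(frame_times) / elapsed:.1f} FPS")
    print(f"worst frame: {max(frame_times) * 1000:.1f} ms")   # spikes show up as stutter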

Without realistic motion blurring, video games and computer animations do not look as fluid as film, even at a higher frame rate. When a fast-moving object appears in two consecutive frames, the gap between its positions in the two frames contributes to a noticeable separation between the object and its afterimage in the eye. Motion blurring mitigates this effect, since it tends to reduce the image gap when the two frames are strung together. The effect of motion blurring is essentially to superimpose multiple images of the fast-moving object on a single frame. Motion blurring makes the motion appear more fluid to some people, even though the image of the object becomes blurry in each individual frame.
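In rendering terms, that superimposition amounts to averaging several sub-frame samples of the moving object into one output frame; the following is a deliberately tiny sketch on a one-dimensional 'image' in Python, purely to illustrate the idea:

    # Motion blur by superimposition: render the moving object at several
    # instants within the frame interval and average the results.
    WIDTH = 24

    def render_at(position):
        """Render a 1-pixel object at an integer position on a 1-D image."""
        image = [0.0] * WIDTH
        image[position % WIDTH] = 1.0
        return image

    def motion_blurred_frame(start, velocity, subsamples=8):
        accum = [0.0] * WIDTH
        for i in range(subsamples):
            pos = start + round(velocity * i / subsamples)
            accum = [a + s for a, s in zip(accum, render_at(pos))]
        # The object ends up smeared along its path, each sample dimmed.
        return [a / subsamples for a in accum]

    print(motion_blurred_frame(start=3, velocity=6))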

The human visual system does not see in terms of frames; it works with a continuous flow of light information. A related question is, 'How many frames per second are needed for an observer not to see artifacts?' There is no single answer. If the image switches between black and white on each frame, it appears to flicker at frame rates slower than about 30 FPS; the flicker fusion point, where the eye sees steady gray instead of flickering, tends to be around 60 FPS. However, fast-moving objects may require higher frame rates to avoid artifacts, and the fusion point varies between individuals and with lighting conditions. The flicker fusion point applies only to images of absolute values, such as pure black and white; a more analog representation, with intermediate shades, can run at a lower frame rate and still be perceived as continuous by a viewer. For example, motion blurring in digital games allows the frame rate to be lowered while the human perception of motion remains unaffected; this is the equivalent of introducing shades of gray into the black-and-white flicker.
