Dennis Gabor and the Hologram Theory

When asked about the future, we often respond with Hollywood-influenced concepts and inventions. Yet we rarely appreciate the impact of such thinking, or how often these imagined scenarios influence young scientists and engineers worldwide. One truly futuristic concept of the past that has yet to develop into a mainstream household commodity is the hologram. First proposed in 1947 by Dennis Gabor, a Hungarian-British electrical engineer and physicist, the hologram continues to challenge modern technology companies with its physical makeup and complex function. Gabor invented a method of storing on photographic film three-dimensional (3-D) images of the information pattern encoded on a beam of light, and he received the 1971 Nobel Prize in Physics for his invention and development of the holographic method.

The term hologram derives from the combination of the Greek words holos, meaning whole, and gramma, meaning anything written. In essence, a hologram is the whole (holos) 3-D message contained, or written (gramma), in a single beam of light. This contrasts with the partial message contained in a two-dimensional (2-D) photograph, and it is the major feat that distinguishes 3-D imaging from 2-D. However, technology wasn't yet able to produce such mechanisms when Gabor first proposed his theory in 1947, and it wasn't until the invention of the laser in 1960 that experiments with holography could truly commence.

Processes within holography share similarities with those of photography, but there are major differences. As a whole, holography is a two-stage process. The first stage consists of recording a hologram in the form of an interference pattern: a pattern produced by the interaction of waves that are coherent with one another because they share the same frequency. The effect is similar to the iridescent mixing of colors visible to the naked eye on floating soap bubbles or atop a film of oil. In the second stage, the hologram acts as a diffraction grating, a medium that splits, or diffracts, light into multiple beams traveling in different directions, and the image of the subject is reconstructed to display the final holographic image; the diffraction process gives each beam the ability to reconstruct the entire object. Because both the intensity and the phase of the light wave are recorded, the result is a 3-D picture of an object that can be viewed from several angles. One field exploring the benefits of holography is medicine, where researchers have investigated X-ray holography for high-resolution imaging.
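The interference stage can be illustrated with the textbook two-beam intensity formula. This is a simplified sketch that ignores polarization and spatial geometry: for two coherent waves of the same frequency, the combined intensity depends on their phase difference.

```python
import math

# Two coherent waves of the same frequency interfere; the combined
# intensity depends on the phase difference between them:
#   I_total = I1 + I2 + 2*sqrt(I1*I2)*cos(delta_phi)
def interference_intensity(i1, i2, delta_phi):
    return i1 + i2 + 2 * math.sqrt(i1 * i2) * math.cos(delta_phi)

# Sample the phase difference for two equal-intensity beams:
# in phase -> bright fringe, out of phase -> dark fringe.
pattern = [interference_intensity(1.0, 1.0, phi)
           for phi in (0.0, math.pi / 2, math.pi)]
print(pattern)  # approximately [4.0, 2.0, 0.0]
```

The alternating bright and dark fringes produced by this formula are exactly the kind of pattern a hologram stores on film.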

Conventional photography, by contrast, is a much simpler process. Unlike holography, only the intensity of the light is recorded, so a camera produces a 2-D figure of an object rather than the 3-D image described above. This is also why subjects in photographs can be viewed from only one angle at a time rather than in a full 360-degree image.
Both of these image-producing methods, photography and holography, are light-dependent ways of storing information: each records information carried in the visible, or optical, region of the electromagnetic radiation spectrum. Holography, however, additionally depends on the laser, an indispensable source of coherent light that makes optical holography possible.

An interesting fact about the hologram is that the general concept plays into the famous holographic principle, first introduced by the Dutch theoretical physicist Gerard 't Hooft and developed by Leonard Susskind in the 1990s. This principle states that the physics of a 3-D space can be mathematically encoded on a 2-D surface, suggesting that the 3-D universe we continuously experience and perceive may be, in a sense, the "image" of a 2-D one. Some physicists have even speculated that the universe is one giant hologram. Even though we have come to see the importance of certain techniques that use holography, we still question the importance of individual holograms. As unfamiliar as many consumers may be with the concept, this sub-field of laser technology may slowly find its place in our everyday lives as device-based learning begins to flourish.

With smartphone technology developing at such a rapid pace, it's safe to assume that these hand-held devices will begin to see changes in their hardware. Samsung, for example, has advertised a smartphone prototype that displays holographic images "mid-air" above the phone's screen.

Young scientists and engineers worldwide are becoming more and more interested in holograms. What is certain is that major changes in this use of the technology are currently underway and may come to stare us in the face within the next five to ten years.

Throwback Thursday: The Birth of the Bose-Einstein Condensate

Twenty-one years ago, researchers at the University of Colorado put the predictions of Satyendra Nath Bose and Albert Einstein to the test, producing the first gaseous condensate.

Known now as a Bose-Einstein Condensate, this state of matter refers to a dilute gas of bosons cooled to temperatures close to absolute zero. At such temperatures, the majority of the bosons occupy the lowest possible quantum state. Once in this state, the bosons display quantum behavior at the macroscopic, rather than the atomic, level, a behavior known as macroscopic quantum phenomena.
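How quickly the atoms pile into that lowest state as the gas cools can be estimated with a standard textbook formula for an ideal Bose gas held in a 3-D harmonic trap, where the condensed fraction below the critical temperature Tc is 1 − (T/Tc)³. This is a simplified ideal-gas sketch; real experiments deviate from it.

```python
def condensate_fraction(t, t_c):
    """Ideal-gas estimate of the condensed fraction for bosons in a
    3-D harmonic trap: N0/N = 1 - (T/Tc)^3 below the critical
    temperature Tc, and essentially zero above it."""
    if t >= t_c:
        return 0.0
    return 1.0 - (t / t_c) ** 3

# Cooling well below Tc sends nearly every atom into the ground state,
# which is the hallmark of a Bose-Einstein Condensate.
for ratio in (1.0, 0.9, 0.5, 0.1):
    print(f"T/Tc = {ratio}: condensed fraction = {condensate_fraction(ratio, 1.0):.3f}")
```

The steep cubic dependence shows why the last stages of cooling matter so much: halving the temperature relative to Tc already condenses nearly ninety percent of the atoms.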

June 5th, 1995 saw the birth of the very first pure Bose-Einstein Condensate. Using a dilute vapor of roughly 2,000 rubidium-87 atoms, researchers Eric Cornell, Carl Wieman, and their team cooled the atoms using a combination of laser cooling and a process known as magnetic evaporative cooling.

Compared to other states of matter, the Bose-Einstein Condensate is fairly fragile. Disruptions to the surrounding environment can raise the temperature of the condensate, returning it to an ordinary gaseous state. That is not to say that the condensate is too unstable for practical research; rather, it has opened many doors into theoretical and experimental research on physical properties, and beyond.

Since its origin in the mid-1990s, the Bose-Einstein Condensate has been used to slow light pulses to very low speeds. Others are using it to model black holes and study their properties in an observable environment. Using an optical lattice, or "the interference of counter-propagating laser beams," allows researchers to observe the condensate in fewer than three dimensions.

In recent years, researchers in the emerging field of atomtronics have been using the concepts of the Bose-Einstein Condensate to manipulate groups of identical atoms with lasers. Atomtronics is defined as the "creation of atomic analogues of electronic components." In layman's terms, atomtronics uses super-cooled atoms to, theoretically, replace the traditional components found in the electronics we use every day. The flow of the condensate is similar to that of an electric current, priming it as a potential successor to traditional electronics.

The Bose-Einstein Condensate was theorized nearly a century ago, and two decades have passed since the theory was first put to the test. Today, the once-theoretical state of matter is used, often hand in hand with laser cooling, to challenge what we know about physics and beyond. While atomtronics will not be replacing our electronic devices anytime soon, it is a field worth watching as many seek alternatives to our current energy consumption.

Using Lasers to Detect Mutant Bacteria

Using lasers to detect harmful bacteria is a topic of interest to more than a few researchers. Previously, we highlighted developments where lasers were used to identify contaminated food, which may be used to stop food poisoning long before contaminated food hits a dinner plate. Now, researchers at Purdue University have developed a laser tool that not only detects harmful bacteria but also recognizes mutated strains.

The tool, known as bacteria rapid detection using optical scatter technology, or BARDOT, works similarly to the TDLAS method developed at the Institute of Information Optics, Zhejiang Normal University, Jinhua, China. BARDOT scans colonies of bacteria, revealing patterns created by the bacteria. Each type of bacteria produces a unique "scatter pattern," which can be compared against known scatter patterns to help identify the strain. Like TDLAS, bacteria such as salmonella, E. coli, and listeria can be identified in a short period of time.
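The matching step described above amounts to comparing a measured pattern against a library of known ones. The sketch below is purely hypothetical: the feature values are invented for illustration, and real scatter-pattern classification is far more sophisticated, but the nearest-match idea is the same.

```python
import math

# Hypothetical library of scatter-pattern "signatures", each reduced to a
# small feature vector. All numbers here are invented for illustration.
LIBRARY = {
    "salmonella": [0.82, 0.10, 0.45],
    "e_coli":     [0.30, 0.75, 0.20],
    "listeria":   [0.55, 0.40, 0.90],
}

def identify(pattern):
    """Return the library strain whose stored signature is closest
    (by Euclidean distance) to the measured pattern."""
    return min(LIBRARY, key=lambda name: math.dist(pattern, LIBRARY[name]))

measured = [0.50, 0.42, 0.85]  # a hypothetical measurement
print(identify(measured))      # the closest stored signature wins
```

In practice the library would hold many known patterns per strain, which is why building up that library (as discussed below) matters so much.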

What separates BARDOT from TDLAS and other bacteria-detecting scans? According to researchers Arun Bhunia and Atul Singh, BARDOT is also able to detect genetic mutations in listeria, and at the same rate at which it detects other strains of bacteria. Even more intriguing, BARDOT can detect the mutations faster than scientists can physically analyze the bacteria themselves. Where traditional analysis of mutated bacteria takes a handful of days, BARDOT takes hours to "read" a bacterial scatter pattern.

The researchers tested BARDOT by allowing it to analyze a regular, wild-type listeria pattern, then deleting a gene from the strain. BARDOT was able to recognize the gene-deleted strain as a listeria pattern. When the original gene was restored, BARDOT still recognized it as the initial type of bacteria, despite the major genetic change, or mutation.

Where TDLAS has already been tested and proven effective on biological surfaces, Bhunia plans to build a larger library of known bacterial patterns before testing BARDOT as a way to detect food contamination. The details of the findings are published through the American Society for Microbiology.

The Future of Fiber Optics

You wake up in the morning and the first thing you do is grab your cell-phone. You check your notifications and may even wonder why this is the norm. With the internet becoming ever more accessible and more widely used, it's safe to believe that it isn't leaving us anytime soon. Yet the question remains: where did the internet come from? The internet and light-based signaling can both be traced back to one major feat: fiber optics. Fiber optics, or optical fibers, are the components that helped create the internet and make international communications possible.

More than half a century ago, fiber optics were developed chiefly for medical and military purposes, most of which fell within the subcategory of imaging. Years later, the invention of the laser led to the major use of fiber optics in telecommunications, thanks in part to the discovery that signals guided inside these fibers aren't significantly affected by air movement or other environmental factors such as fog, haze, and weather. Fiber optics are also able to carry hundreds of gigabits per second through the use of advanced modulation, the process of adding information to a carrier signal, making them a reliable route to greater bandwidth. Advances in technology have enabled even more data to be conveyed through a single optical fiber over long distances.
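The idea of modulation, adding information to a carrier signal, can be sketched in a few lines. This is a toy example using simple on-off keying (the carrier is switched on for a 1 bit and off for a 0 bit); real fiber links use far more sophisticated modulation formats.

```python
import math

# Toy on-off keying: each bit becomes a burst of carrier samples.
# A 1 switches the carrier on; a 0 switches it off.
def on_off_keying(bits, samples_per_bit=8, carrier_cycles_per_bit=2):
    signal = []
    for bit in bits:
        for n in range(samples_per_bit):
            phase = 2 * math.pi * carrier_cycles_per_bit * n / samples_per_bit
            signal.append(math.sin(phase) if bit else 0.0)
    return signal

signal = on_off_keying([1, 0, 1])
print(len(signal))  # 3 bits * 8 samples per bit = 24 samples
```

Advanced formats pack many bits into each symbol by varying the carrier's amplitude and phase together, which is how a single fiber reaches hundreds of gigabits per second.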

Scientists have since dubbed this period the "big-data era," thanks to daily video streaming and heavy use of hand-held devices and computer applications. The need for bandwidth will only continue to grow. Since their first deployment, fiber optics have changed the way society communicates. Even so, communication is still evolving as we know it, actively making a difference in the average consumer's lifestyle.

And yet, one question remains: what will we do with all this bandwidth? We've stepped into this big-data era and become equally inspired and enthralled by developing the devices of tomorrow and bettering our present communication tools. Optical fibers have helped us learn about the rest of the world amid the surge of globalization, creating a more understanding and knowledgeable society through the spread of information. The next decade will thrive on the use of optical fiber networks.

The Invention of the Bar Code Scanner

Supermarkets and warehouses across the globe are participating in the ever-growing game of bar code scanning. Bar codes appear on most of the items you typically surround yourself with. Any item bought in a major grocery store, purchased online, or routed through warehouse processing will undergo some sort of scanning in order to keep managers informed of increases or decreases in inventory counts.

A bar code, or universal product code (UPC), is a quick and efficient way of entering numerical data into a computer. These codes are used in supermarkets and warehouses alike and take much of the stress out of counting and tracking various inventory classifications. If you're an avid shopper, or at least familiar with the labels affixed to a given item, you're aware of the strange appearance bar codes share. On the surface, these codes appear to be nothing but vertical black and white lines. Looking more closely, however, we can understand exactly how they function.

To explain further, these vertical lines represent specific bits of information that are scanned by a laser before being transmitted to and interpreted by a computer. The black bars of a bar code reflect laser light poorly, so each is read as a one (1), while the white spaces reflect laser light well and are read as a zero (0). Each digit of the code occupies seven vertical modules, each module being either a bar or a space, and each group of seven is interpreted by the computer as a single number. For example, on the left-hand side of the code the digit one is represented as "0 0 1 1 0 0 1", or "space, space, bar, bar, space, space, bar". The digit patterns on the right-hand side of the bar code are the optical opposites of those on the left: the right-hand version of the digit one is "1 1 0 0 1 1 0", or "bar, bar, space, space, bar, bar, space".
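The encoding just described can be captured in a few lines. The table below is the standard set of UPC-A left-hand digit patterns; the right-hand patterns are produced, as the text explains, by complementing each module.

```python
# Standard UPC-A left-hand digit patterns (0 = space, 1 = bar).
LEFT = {
    0: "0001101", 1: "0011001", 2: "0010011", 3: "0111101", 4: "0100011",
    5: "0110001", 6: "0101111", 7: "0111011", 8: "0110111", 9: "0001011",
}

def right_pattern(digit):
    """Right-hand patterns are the optical opposites of the left-hand
    ones: complement each of the seven modules."""
    return "".join("1" if bit == "0" else "0" for bit in LEFT[digit])

print(LEFT[1])           # 0011001 -> space, space, bar, bar, space, space, bar
print(right_pattern(1))  # 1100110 -> bar, bar, space, space, bar, bar, space
```

Because every left-hand pattern starts with a space and every right-hand pattern starts with a bar, a scanner can also tell which half of the code it is reading, even if the item is swiped upside down.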

A single bar code represents a twelve-digit number. These digits encode several things: the product type (the first digit), the manufacturer code (the next five digits, making up the left half of the set), the product code (the following five digits, making up the right half), and the check digit (the very last number). The full number is usually printed directly below the bars of any store-bought item as a precaution, in case the bars themselves become unreadable. The check digit ensures that incorrect information isn't entered into the computer system, which could mislabel one item as another.
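The check digit itself follows a fixed recipe: triple the sum of the digits in the odd positions, add the digits in the even positions, and pick whatever digit brings the total to a multiple of ten. A short sketch:

```python
def upc_check_digit(first11):
    """Compute the UPC-A check digit for the first eleven digits.
    Digits in the odd positions (1st, 3rd, ...) are weighted by 3."""
    odd = sum(int(d) for d in first11[0::2])   # 1st, 3rd, 5th, ...
    even = sum(int(d) for d in first11[1::2])  # 2nd, 4th, 6th, ...
    return (10 - (3 * odd + even) % 10) % 10

# Example: the sample UPC 03600029145 gets check digit 2,
# giving the full twelve-digit code 036000291452.
print(upc_check_digit("03600029145"))
```

If a scanner misreads a bar and the recomputed check digit doesn't match the twelfth digit on the label, the read is rejected rather than ringing up the wrong item.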

Taking a step back in history allows us to see how exactly the bar code scanner came to be. In 1932, a business student named Wallace Flint first proposed a system that used punch cards to get shoppers "checked out" in a more timely manner. Flint suggested that grocers use punch cards to handle the influx of customers in heavily populated regions. The punch-card concept itself dated to 1890, when it was first used for the U.S. census as a way to count United States citizens. Flint believed the system could give store management a well-kept record of what was being bought, but there was a problem: the card-reading equipment was both bulky and expensive, so nothing came of the initial proposal. Still, Flint's concern for the consumer would later pave the way for modern scanning technology.

The first step towards creating the bar code took place in 1948, when a graduate student named Bernard Silver overheard a conversation in the halls of Philadelphia's Drexel Institute of Technology. The conversation took place between the president of a food chain and one of the school's deans. The president, concerned about store checkout speeds, wanted research conducted on a system that would capture product information automatically. The dean turned down the request, but that didn't stop Silver from mentioning what he had overheard to one of his friends, Norman Joseph Woodland. Having no idea of the influence he would later have on these scanning devices, Woodland became interested in the concept and immediately took to the books.

Woodland's first idea was to use ink patterns that would glow under ultraviolet light. Silver and Woodland joined forces to build a device to test this ultraviolet concept, and while the test worked, the men ran into problems such as ink instability and the high cost of printing the patterns. After several months of hard work and sleepless nights, Woodland finally came up with the linear bar code, created by meshing two previously established technologies: movie sound systems and Morse code.

The linear bar code made use of Lee de Forest's movie sound system from the early 1920s. De Forest was an American inventor and electrical engineer credited with developing the Audion, a vacuum-tube amplifier that helped AT&T establish coast-to-coast telephone signals and that served for decades in televisions, radios, and other sound systems. Soon after, Woodland began putting together a patent application at Drexel while Silver investigated what form the new codes should take. On October 20, 1949, Woodland and Silver filed their official patent application.

In 1951, Woodland landed a job with International Business Machines (IBM), and the following year he and his partner, Silver, set out to build the first bar code reader. The initial device was the size of a desk and had to be wrapped in dark oilcloth to keep out excess light. It relied on two key components: a five-hundred-watt incandescent bulb as its light source and, as the reader, an RCA 935 photo-multiplier tube, a detector of very weak light signals originally designed for movie sound systems. In October 1952, Woodland and Silver's patent was granted. Woodland tried and failed to persuade IBM to hire a consultant to evaluate bar codes, and although IBM later attempted to buy the patent, its offers came to nothing. Philco, an electronics company, bought the patent in 1962 and later sold it to RCA. Since then, the bar code scanner has undergone continual evolution.

One of the later iterations of the bar code was a system created by David J. Collins. An MIT graduate, Collins had a knack for implementing new strategies to accomplish tricky tasks. He devised a system that helped railroad companies automatically track freight cars: each car was given a four-digit number identifying the railroad that owned it and an additional six-digit number identifying the specific car.

After years of failed attempts and discarded concepts, a breakthrough finally came. On June 26, 1974, a supermarket in Troy, Ohio sold a pack of Juicy Fruit chewing gum, the first item ever rung up by a bar code scanner. After this historic moment, scanner adoption slowly began to climb, and by the late 1970s sales of these systems started to flourish. The invention of the bar code scanner has made things much more convenient for the average shopper: cashiers and store clerks no longer need to record transactions by hand, and while checkout lines still take time to get through, they aren't as slow as they once were.

Today, small businesses are able to thrive by keeping lists of their inventory, while larger stores require more extensive, highly ordered lists to keep count of brands and "stock-keeping units," or SKUs. These stores and major companies are the ones that find comfort and daily use in bar codes and scanners and keep the technology alive and thriving. When a store manager or owner needs information on a certain product, all they have to do is scan its bar code.

We know where bar codes have brought us, but we have yet to see where they will lead. The future of bar codes may even include DNA bar coding. The International Barcode of Life (iBOL) is a project currently underway that aims to compile a catalog of every species inhabiting the Earth, and researchers have already begun using DNA bar codes to study the mating habits of insects. New inventions and technologies, it seems, are forever expanding both convenience and experience.