Official Blog

Smart Note Taker

Smart Note Taker:-

The Smart Note Taker is a helpful product that satisfies the needs of people in today’s fast-paced, technological life. This product can be used in many ways. The Smart NoteTaker provides fast and easy note taking for people who are busy with something else. With the help of the Smart NoteTaker, people will be able to write notes in the air while being busy with their work.

The written note will be stored on the memory chip of the pen and can be read in digital form after the job is done. This will save time and ease life. The Smart Note Taker is also helpful for blind people, who can think and write freely. Another place where this product can play an important role is when two people talk on the phone. The subscribers are apart from each other while they talk, and they may want to use figures or text to understand each other better. It’s also especially useful for instructors in presentations.

The instructors may not want to present the lecture in front of the board. The drawn figure can be processed and sent directly to the server computer in the room. The server computer can then broadcast the drawn shape over the network to all of the computers present in the room. In this way, lectures are intended to be more efficient and fun. This product will be simple but powerful: it will be able to sense the 3D shapes and motions that the user tries to draw. The sensed information will be processed, transferred to the memory chip, and then shown on the display device. The drawn shape can then be broadcast to the network or sent to a mobile device.

“Technology is the best when it brings people together”

Technical Definition of Smart Note Taker:-

In order to meet the technical requirements of the product, we need: an operating system such as Windows or Linux to implement the software part of the project; displacement sensors to recognize the displacement of the pen in three dimensions; a parallel cable to communicate with the computer; software to resolve the displacement data, find the individual coordinate displacements along the three axes, and transform the data into text format; an analog-to-digital converter to process the analog displacement data and convert it into digital format; a switch to control the pen; and a rechargeable battery.

  • Analog to digital converter
  • Software program to convert data into text or string format
  • Operating System
  • Parallel cable
  • Switch
  • Rechargeable battery
  • Displacement Sensor
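As a rough illustration of the data path above, the sketch below shows how an ADC quantization step and a displacement-accumulation step might fit together. The function names, reference voltage, and sensor model are hypothetical; this is a conceptual sketch, not the product’s actual firmware.

```python
# Hypothetical sketch of the pen's data path: analog sensor voltage -> ADC,
# then per-sample 3D displacements accumulated into pen positions.

def adc_convert(voltage, v_ref=3.3, bits=10):
    """Quantize an analog sensor voltage into a digital ADC reading."""
    full_scale = (1 << bits) - 1
    level = round((voltage / v_ref) * full_scale)
    return max(0, min(full_scale, level))  # clamp to the ADC's range

def displacements_to_path(deltas, start=(0.0, 0.0, 0.0)):
    """Accumulate per-sample (dx, dy, dz) displacements into 3D positions."""
    x, y, z = start
    path = [start]
    for dx, dy, dz in deltas:
        x, y, z = x + dx, y + dy, z + dz
        path.append((x, y, z))
    return path

# Example: three displacement samples trace a short stroke in the air.
stroke = displacements_to_path([(1.0, 0.5, 0.0), (1.0, -0.5, 0.0), (0.0, 1.0, 0.1)])
```

The resulting path of positions is what the recognition software would then match against character shapes to produce text.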

“It’s not faith in technology, it’s faith in people.”

Note Taker for PC:-

PC Notes Taker is the world’s first device that captures natural handwriting on any surface onto a PC in real time. Based on a revolutionary electronic pen, PC Notes Taker displays the user’s handwritten notes, memos or drawings on the computer, and stores the image for future use. PC Notes Taker is ideal for markets where the handwritten input is essential, such as health, educational and financial sectors. Supplied with user-friendly software, PC Notes Taker is compatible with PCs and notebooks.


Features:-

  • Captures handwriting from any plain paper or other writing surface
  • Inputs continuous writing up to A4 page size
  • Inserts sketches, signatures, equations, and notes into Word documents
  • E-mails sketches or handwritten notes in any language using MS Outlook
  • Converts handwriting to digital text using the MS Word recognition engine
  • Annotates, adds comments, edits and draws in your own handwriting on MS Office documents
  • Creates instant messages using ICQ

The Smart Pen system includes the Smart Pen and a pen cradle connected to an internet-enabled computer. As case report forms (CRFs) are filled out, the Smart Pen records each stroke. It identifies each CRF, and where the pen is on the page, through a very fine grid pattern that appears as light gray background shading on the CRF. The pen is then placed in the cradle, activating a password-protected Internet link to Health Decisions. Data are interpreted into fields and validation can occur immediately, with queries returned to sites quickly over the Internet. The process also creates an exact copy of the original CRF that can be read for notation and comparison with the interpreted data fields. Health Decisions takes the best technology and applies it to your clinical trials.
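The grid idea can be illustrated with a toy encoding. Real digital-pen patterns (such as Anoto-style dot grids) are far more sophisticated; the sketch below only shows how one globally unique grid-cell index could identify both the form and the position on it. All constants and names here are made up for illustration.

```python
# Toy model of a position-encoding grid: every cell on every page has a
# globally unique index, so decoding one cell identifies the page (the CRF)
# and the row/column on that page. Dimensions are arbitrary examples.

CELLS_PER_ROW = 1000    # hypothetical cells across one page
ROWS_PER_PAGE = 1400    # hypothetical cell rows on one page

def decode_cell(cell_index):
    """Map a globally unique grid-cell index to (page_id, row, col)."""
    cells_per_page = CELLS_PER_ROW * ROWS_PER_PAGE
    page_id, offset = divmod(cell_index, cells_per_page)
    row, col = divmod(offset, CELLS_PER_ROW)
    return page_id, row, col
```

With such a scheme, every pen stroke lands on cells whose indices resolve directly to "which form, and where on it", which is what lets the strokes be interpreted into data fields.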

“In Technology Whatever can be done will be done”

Note Taker for mobile:-

The Ultimate Handwriting Capture Device Mobile NoteTaker™ is the world’s first portable handwriting capture device based on natural handwriting as an input. Attach plain paper of any kind and use the Pegasus electronic pen to capture, store and share handwritten drawings, sketches, notes, and memos at meetings, lectures, and conferences.

Mobile NoteTaker™ has a built-in LCD to confirm input. The on-board flash memory can store up to 50 pages.

Features:-

  • Uses standard paper – no special paper required
  • Stores up to 50 A4 pages
  • Includes an LCD to view and confirm input
  • Operates both in mobile mode and when connected to a PC, notebook or other device
  • Connects to a PC/notebook via USB cable (included)
  • Includes software for synchronization and management of stored files
  • Writes directly into MS Office applications (in connected mode)
  • Allows file transfer over LAN, e-mail, and instant messaging applications (in connected mode)

Capture, Organize, and Share Your Notes Digitally – Anywhere, Anytime!

Mobile Mode

Enables capture and storage of notes and sketches digitally at meetings, lectures, and conferences.

“Because People who are crazy enough to think they can change the world are the ones who do”

Connected Mode

Synchronizes the Mobile NoteTaker™ and a PC/notebook via USB cable (included). You can upload, organize, move, edit or add to handwritten notes, ideas, sketches, phone numbers, or reminders. The included software also enables memos, notes, and sketches to be sent via e-mail or over the LAN. It is also possible to write directly into MS Word or Outlook, and to add a personal touch to ICQ instant messages. Based on Pegasus’ successful PC Notes Taker, Mobile NoteTaker™ is the ultimate handwriting capture device. Everything you need to get started is right in the box. Even if you don’t have standard-size paper with you, you can use anything: an envelope, an old receipt, a tear-off from a paper bag, and best of all, in your own natural handwriting.

As long as you have the Mobile NoteTaker™, you can jot down your most inspired ideas and be sure that you will never lose them again.

Pill Camera

What is wireless capsule enteroscopy (Pill Camera)?

Wireless capsule enteroscopy, also known as the pill cam, is a relatively new method of diagnosing diseases within the small intestine.

To reach a diagnosis, a pill-sized video capsule is swallowed; it slowly travels through your intestine before being naturally excreted. The capsule has its own built-in light and camera to take pictures of the walls of the intestine and detect bleeds, small intestine tumors, ulcers or abnormal vascular masses. Two to four images are taken per second for up to 8 hours. The images are transmitted to a recorder that is worn around the waist.
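A quick back-of-the-envelope check of the image volume implied by these figures (2–4 frames per second for up to 8 hours):

```python
# Rough image-count estimate from the capsule's quoted frame rate and duration.

def images_captured(frames_per_second, hours):
    """Total images for a recording at a steady frame rate."""
    return frames_per_second * hours * 3600  # 3600 seconds per hour

low = images_captured(2, 8)   # lower bound at 2 fps over 8 hours
high = images_captured(4, 8)  # upper bound at 4 fps over 8 hours
```

At the lower frame rate, and with a slightly shorter effective recording, this range is broadly consistent with the roughly 50,000 images per investigation mentioned later in this article.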

You may require this investigation if you have abnormal bleeding or are suspected to have Crohn’s disease. This investigation is non-invasive and allows doctors to examine all three portions of the small intestine

  • Duodenum,
  • Jejunum and
  • Ileum

which often cannot be reached by other imaging methods.
What is Capsule Endoscopy used for?

Capsule enteroscopy allows your doctor to visualize the small intestine, which is often missed by conventional imaging methods such as upper gastrointestinal endoscopy or colonoscopy. The most common reason for ordering this investigation is to look for sources of bleeding. You may have noticed blood in your vomit or faeces, or have unnoticed blood loss that can cause iron deficiency anaemia. This investigation can identify polyps, inflammatory bowel disease (Crohn’s disease or ulcerative colitis), ulcers and tumors that may be the source of the bleeding. Such lesions may not have been found by previous investigations, but once identified, your doctor can decide on an appropriate course of management.

The other main use is for evaluating the extent of Crohn’s disease, which commonly affects the small intestine. This investigation is particularly useful for detecting early disease which may be missed by barium examination and CT. Also, it can be useful in patients who have symptoms which do not match the extent of disease (if any) seen by conventional imaging techniques.

Wireless capsule enteroscopy may have further indications in the future as it is safe, easy to perform, non-invasive and doesn’t require sedation. In particular, if techniques are invented that allow treatments or biopsies (tissue sampling) to be performed at the time of the procedure, it will become a very useful procedure.

It should be noted that this investigation may not always be the best for you. The images taken by the camera are of poorer quality than those obtained by upper gastrointestinal endoscopy or colonoscopy. In addition, the camera may move too quickly or too slowly through the relevant areas, or be facing the wrong way, and so miss some lesions. Thus, it is only one of a series of investigations you may require in order for your doctor to make a correct diagnosis.
How do you prepare for the procedure?

Prior to the procedure, your doctor will explain what the procedure involves, risks, benefits and why it is indicated in your case. You will need to fast (don’t eat or drink) for around 10 hours before the investigation, as it is safest and produces the best results when the intestine is empty.

No fluid is taken for at least 2 hours and no food for a further 4 hours after swallowing the capsule. Oral medications can be taken after 2 hours if required.

If you are diabetic, the medication must be stopped during the fasting period and insulin use should be discussed with your doctor.

There is usually no need for intestine preparation, but strongly colored tablets (e.g. iron tablets) should be avoided for about 24 hours before the procedure.
What does the procedure involve?

For the procedure, you will be required to swallow a 26 × 11 mm endoscopy capsule equipped with a video camera, light source, radio transmitter, and batteries. The capsule is swallowed with a substance called simethicone, to prevent bubbles forming and interfering with the images.

The capsule passes naturally through your body via peristalsis (waves of contraction of the muscular walls of the gastrointestinal tract that propel it along) while it takes many images. The images are detected by a sensor device attached to your abdomen with 8 small aerials taped to the skin (similar to the electrodes used for an ECG). These are recorded and stored on the image recorder worn around your waist.

After approximately 8 hours you will be required to return to the medical center so the images can be downloaded and examined on a computer monitor by the physician. Around 50,000 images will be taken per investigation, so it can take a long time for these to be examined and processed. Your doctor will normally inform you of the results within a week.

You should pass the capsule naturally after 8-72 hours, but it can sometimes take up to two weeks. If you haven’t passed the capsule by this time, an X-ray may need to be performed to see if it is still present or obstructed. The capsule is discarded after it has been excreted.
What are the risks?

Capsule enteroscopy is a safe procedure and is well tolerated by most patients. Less than 1 in 10 people have difficulty swallowing the capsule, which has a gel coating to help you swallow it.

The main risk is retention of the capsule, which occurs in about 1 in 100 people. The capsule can become lodged at a stricture (narrowing) caused by a tumor, inflammation or scarring from previous surgery. This is not dangerous in the short term, but you may require surgical intervention to remove it. Obstruction may present as bloating, vomiting or pain; you should consult your doctor promptly if you experience these symptoms. In most cases, the capsule will pass naturally from the body without any problems.

 

Generic Visual Perception Processor

By Author – Ashish Kasture

 

The Generic Visual Perception Processor (GVPP) is a single chip modeled on the perception capabilities of the human brain, which can detect objects in a motion video signal and then locate and track them in real time. Imitating the neural networks of the human eye and brain, the chip can handle about 20 billion instructions per second. This electronic eye on a chip handles tasks ranging from sensing variable parameters, arriving in the form of video signals, to processing them for tracking.
Generic Visual Perception Processor can automatically detect objects and track their movement in real-time. The GVPP, which crunches 20 billion instructions per second (BIPS), models the human perceptual process at the hardware level by mimicking the separate temporal and spatial functions of the eye-to-brain system. The processor sees its environment as a stream of histograms regarding the location and velocity of objects. GVPP has been demonstrated as capable of learning-in-place to solve a variety of pattern recognition problems. It boasts automatic normalization for varying object size, orientation and lighting conditions, and can function in daylight or darkness. This electronic “eye” on a chip can now handle most tasks that a normal human eye can.
That includes driving safely, selecting ripe fruits, and reading and recognizing things. Sadly, though modeled on the visual perception capabilities of the human brain, the chip is not a medical marvel poised to cure the blind. The GVPP tracks an “object,” defined as a certain set of hue, luminance and saturation values in a specific shape, from frame to frame in a video stream by anticipating where its leading and trailing edges make “differences” with the background. That means it can track an object through varying light sources or changes in size, as when an object gets closer to the viewer or moves farther away. The GVPP’s major performance strength over current-day vision systems is its adaptation to varying light conditions. Today’s vision systems dictate uniform, shadowless illumination, and even next-generation prototype systems, designed to work under normal lighting conditions, can be used only from dawn to dusk.
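As a concept sketch (not the GVPP’s actual hardware algorithm), the idea of reducing a frame to histograms of where a hue/luminance/saturation-defined object lies can be written as follows. The frame layout and HLS ranges are illustrative assumptions.

```python
# Conceptual illustration of histogram-based object location: an "object" is
# a box of hue/luminance/saturation values, and each frame is reduced to
# per-row and per-column counts of matching pixels, from which a centroid
# (the object's estimated position) follows.

def in_range(pixel, lo, hi):
    """True if an (h, l, s) pixel falls inside the target object's HLS box."""
    return all(lo[i] <= pixel[i] <= hi[i] for i in range(3))

def locate(frame, lo, hi):
    """Return column/row match histograms and the matching pixels' centroid."""
    rows, cols = len(frame), len(frame[0])
    col_hist = [0] * cols
    row_hist = [0] * rows
    for r in range(rows):
        for c in range(cols):
            if in_range(frame[r][c], lo, hi):
                col_hist[c] += 1
                row_hist[r] += 1
    total = sum(col_hist)
    if total == 0:
        return col_hist, row_hist, None  # object not present in this frame
    cx = sum(c * n for c, n in enumerate(col_hist)) / total
    cy = sum(r * n for r, n in enumerate(row_hist)) / total
    return col_hist, row_hist, (cx, cy)

# Example: a 3x4 frame where the "object" (hue ~120) occupies two pixels.
bg, obj = (0, 50, 50), (120, 50, 200)
frame = [[bg, bg, obj, bg],
         [bg, bg, obj, bg],
         [bg, bg, bg, bg]]
hist_c, hist_r, centroid = locate(frame, (100, 0, 100), (140, 255, 255))
```

Tracking then amounts to repeating this per frame and following how the histograms (and hence the centroid) shift over time.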

The GVPP, on the other hand, adapts to real-time changes in lighting without recalibration, day or night. For many decades the field of computing has been trapped by the limitations of traditional processors, and many futuristic technologies have been bound by them. These limitations stem from the basic architecture of these processors: traditional processors work by slicing each complex program into simple tasks that the processor can execute. This requires the existence of an algorithm for the solution of the particular problem. But there are many situations where no algorithm exists, or where a human is unable to devise one.
Even in these extreme cases, the GVPP performs well: it can solve such problems with its neural learning function. Neural networks are extremely fault tolerant. By design, even if a group of neurons fails, the network suffers only a smooth degradation of performance; it won’t abruptly fail to work. This is a crucial difference from traditional processors, which fail to work even if a few components are damaged. The GVPP recognizes, stores, matches and processes patterns. Even if a pattern in the input is not recognizable to a human programmer, the neural network will dig it out. Thus the GVPP becomes an efficient tool for applications like pattern matching and recognition.

What is Security Trend?

By Author -Rashmita Soge

 

Looking ahead, a number of emerging IT security advances will arm organizations with the right information at the right time, helping them spot and mitigate potential breaches before they occur. The aim here is to collaborate, contribute, consume and create knowledge about today’s top security trends, and to help identify security issues that are relevant and emerging as well as issues that need more guidance.
Overall, security trends closely follow the technical trends of a given year. AI (artificial intelligence), IoT (internet of things), and data privacy are widely expected to be game changers in the technology industry in 2018.
Here are the key technology trends for 2018 that are anticipated to have an even greater impact on businesses and government in the relentless pursuit to increase efficiencies and enhance connectivity.
Internet of Things (IoT):
Internet-enabled devices are expected to continue their momentum in 2018 and connect even more everyday objects. IoT devices present some very serious security issues. Most IoT devices were never built to withstand even very simple cyber-attacks. Most IoT devices are unable to be patched or upgraded, and will, therefore, remain vulnerable to cyber hacks or breaches.
If compromised, IoT devices with security vulnerabilities can result in a range of issues, such as a simple denial of service, privacy compromises, major infrastructure failures, or even death. There are well-known methods of improving the security of IoT devices, such as implementing additional protection steps and processes, but these have other drawbacks, such as higher costs, and user inconvenience. Government regulation is needed to set national frameworks in place to ensure devices have minimum standards of protection.
Artificial Intelligence (AI):
Many vendors have jumped onto the AI bandwagon in order to build “smarter” systems that can detect and act on security threats, either before or very early after information has been compromised. I expect to see this continue in 2018, with computers becoming more intuitive and defensive. This presents a strong opportunity for developers, with a growing demand for systems to be increasingly intelligent and alert to potential cyber risks.
Just as AI has the potential to boost productivity for businesses and government, hackers will look to AI to find vulnerabilities in software, with machine-like efficiency, to hack into systems in a fraction of the time it would take a human being. This and other scenarios make AI one of the key technology trends to watch and mitigate risk for in 2018.
Cryptocurrency:
Cryptocurrency such as Bitcoin came onto the global agenda in 2017, and its surge in value raises the question: how can owners ensure it is protected and legitimate?
Current cryptocurrency systems have significant issues with scale and performance and may be susceptible to quantum computer attacks in the future. Cryptocurrency systems need to evolve to overcome these issues, and investors in cryptocurrency need to place more emphasis on the security strategies and systems in place at their providers or risk losing their capital overnight.
Cloud Computing:
Many high-profile breaches in recent years have demonstrated the vulnerabilities of cloud computing and how it continues to be a significant issue. The ongoing question of how businesses and individuals can manage their data remotely while ensuring it is protected will continue to resonate in 2018.
While many users will judge that the risk of data compromise does not warrant local control of information when compared with the benefits of convenience and low price of cloud services, a growing number of potential users will come to realize that for them, the risk of data compromise may be too high. This will be particularly true for government agencies, defense, intelligence, banking and finance, and legal services.
Data Privacy:
Data privacy, also called information privacy, is the aspect of information technology (IT) that deals with the ability an organization or individual has to determine what data in a computer system can be shared with third parties. Data privacy continues to erode as every new device monitors our conversations, location, likes and dislikes. A huge virtual dossier is being built on each of us from the digital footprint we constantly leave behind. This will continue into 2018 and beyond.

What is Hawk-Eye?

By Author – Rishabh Sontakke

 

Hawk-Eye is a computer system used in numerous sports such as cricket, tennis, Gaelic football, badminton, hurling, Rugby Union, association football, and volleyball, to visually track the trajectory of the ball and display a profile of its statistically most likely path as a moving image.
The Sony-owned Hawk-Eye system was developed in the United Kingdom by Paul Hawkins. The system was originally implemented in 2001 for television purposes in cricket. The system works via six (sometimes seven) high-performance cameras, normally positioned on the underside of the stadium roof, which track the ball from different angles. The video from the six cameras is then triangulated and combined to create a three-dimensional representation of the ball’s trajectory. Hawk-Eye is not infallible, but is accurate to within 3.6 millimeters and generally trusted as an impartial second opinion in sports.
It has been accepted by governing bodies in tennis, cricket, and association football as a means of adjudication. Hawk-Eye has been used in the Challenge System in tennis since 2006 and in the Umpire Decision Review System in cricket since 2009. The system was rolled out for the 2013-14 Premier League season as a means of goal-line technology. In December 2014 the clubs of the first division of the Bundesliga decided to adopt the system for the 2015-16 season.

How does it work?

The whole setup involves six high-speed vision processing cameras along with two broadcast cameras. When a delivery is bowled, the position of the ball recorded by each camera is combined to form a virtual 3D position of the ball after it is delivered. The whole process of the delivery is broken into two parts: delivery to bounce, and bounce to impact. Multiple frames of the ball’s position are measured, and from these the system calculates the direction, speed, swing, and dip of that specific delivery.
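The triangulation step can be sketched geometrically: each camera contributes a ray (its position plus the direction toward the ball in its image), and the 3D ball position is estimated as the point closest to all rays. The minimal two-ray version below, taking the midpoint of the shortest segment between the rays, illustrates the geometric core; Hawk-Eye’s real pipeline fuses six or more calibrated cameras and is far more elaborate.

```python
# Minimal two-ray triangulation: find the point closest to rays
# p1 + t*d1 and p2 + s*d2 (the midpoint of their closest-approach segment).

def sub(a, b): return tuple(a[i] - b[i] for i in range(3))
def add(a, b): return tuple(a[i] + b[i] for i in range(3))
def scale(a, k): return tuple(a[i] * k for i in range(3))
def dot(a, b): return sum(a[i] * b[i] for i in range(3))

def triangulate(p1, d1, p2, d2):
    """Estimate the 3D point nearest to two camera rays (not parallel)."""
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b  # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t))  # closest point on ray 1
    q2 = add(p2, scale(d2, s))  # closest point on ray 2
    return scale(add(q1, q2), 0.5)

# Two cameras at right angles both "see" the ball at (1, 1, 0).
ball = triangulate((0, 0, 0), (1, 1, 0), (2, 0, 0), (-1, 1, 0))
```

With more cameras, the same idea becomes a least-squares problem over all rays, which is what makes the combined estimate accurate to a few millimeters.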

Deployment in sports

  1. Cricket:
    The technology was first used by Channel 4 during a Test match between England and Pakistan at Lord’s Cricket Ground, on 21 May 2001. It is used primarily by the majority of television networks to track the trajectory of balls in flight. In the winter season of 2008/2009, the ICC trialed a referral system where Hawk-Eye was used for referring decisions to the third umpire if a team disagreed with an LBW decision. The third umpire was able to look at what the ball actually did up to the point when it hit the batsman, but could not look at the predicted flight of the ball after it hit the batsman.
    Its major use in cricket broadcasting is in analyzing leg before wicket decisions, where the likely path of the ball can be projected forward, through the batsman’s legs, to see if it would have hit the stumps. Consultation with the third umpire, whether for conventional slow motion or Hawk-Eye, on leg before wicket decisions is currently sanctioned in international cricket, even though doubts remain about its accuracy.
    The Hawk-Eye referral for an LBW decision is based on three criteria:
    • Where the ball pitched.
    • The location of impact with the leg of the batsman.
    • The projected path of the ball past the batsman.
    In all three cases, marginal calls result in the on-field call being maintained.
    Due to its real-time coverage of bowling speed, the system is also used to show delivery patterns of a bowler’s behavior, such as line and length, or swing/turn information. At the end of an over, all six deliveries are often shown simultaneously to show a bowler’s variations, such as slower deliveries, bouncers, and leg-cutters. A complete record of a bowler can also be shown over the course of a match.
    Batsmen also benefit from the analysis of Hawk-Eye, as a record can be brought up of the deliveries from which a batsman scored. These are often shown as a 2-D silhouetted figure of batsmen and color-coded dots of the balls faced by the batsman. Information such as the exact spot where the ball pitches or speed of the ball from the bowler’s hand (to gauge batsman reaction time) can also help in post-match analysis.
  2. Tennis:
    In late 2005 Hawk-Eye was tested by the International Tennis Federation (ITF) in New York City and was passed for professional use. Hawk-Eye reported that the New York tests involved 80 shots being measured by the ITF’s high-speed camera, a device similar to MacCAM. During an early test of the system at an exhibition tennis tournament in Australia (seen on local TV), there was an instance when the tennis ball was shown as “Out”, but the accompanying word was “In”. This was explained to be an error in the way the tennis ball was shown on the graphical display as a circle, rather than as an ellipse. This was immediately corrected.
    Hawk-Eye has been used in television coverage of several major tennis tournaments, including Wimbledon, the Queen’s Club Championships, the Australian Open, the Davis Cup and the Tennis Masters Cup. The US Open Tennis Championship announced they would make official use of the technology for the 2006 US Open where each player receives two challenges per set. It is also used as part of a larger tennis simulation implemented by IBM called PointTracker.
    The 2006 Hopman Cup in Perth, Western Australia, was the first elite-level tennis tournament where players were allowed to challenge point-ending line calls, which were then reviewed by the referees using Hawk-Eye technology. It used 10 cameras feeding information about ball position to the computers. Jamea Jackson was the first player to challenge a call using the system.
    In March 2006, at the Nasdaq-100 Open in Miami, Hawk-Eye was used officially for the first time at a tennis tour event. Later that year, the US Open became the first grand-slam event to use the system during play, allowing players to challenge line calls.
    The 2007 Australian Open was the first grand-slam tournament of 2007 to implement Hawk-Eye in challenges to line calls, where each tennis player in Rod Laver Arena was allowed two incorrect challenges per set and one additional challenge should a tiebreaker be played. In the event of an advantage final set, challenges were reset to two for each player every 12 games (i.e. at 6-all, 12-all). Controversies followed the event, as at times Hawk-Eye produced erroneous output. In 2008, tennis players were instead allowed three incorrect challenges per set. Any leftover challenges did not carry over to the next set. Once, Amélie Mauresmo challenged a ball that was called in, and Hawk-Eye showed the ball was out by less than a millimeter, but the call was allowed to stand. As a result, the point was replayed and Mauresmo did not lose an incorrect challenge.
    The Hawk-Eye technology used in the 2007 Dubai Tennis Championships had some minor controversies. Defending champion Rafael Nadal accused the system of incorrectly declaring an out ball to be in following his exit. The umpire had called a ball out; when Mikhail Youzhny challenged the decision, Hawk-Eye said it was in by 3 mm. Youzhny said afterwards that he himself thought the mark may have been wide, but then offered that this kind of technology error could easily have been made by linesmen and umpires. Nadal could only shrug, saying that had this system been used on clay, the mark would have clearly shown Hawk-Eye to be wrong. On a hard court, however, the mark left by the ball covers only a portion of the total area the ball was in contact with, since a certain amount of pressure is required to create the mark.
    The 2007 Wimbledon Championships also implemented the Hawk-Eye system as an officiating aid on Centre Court and Court 1, and each tennis player was allowed three incorrect challenges per set. If the set produced a tiebreaker, each player was given an additional challenge. Additionally, in the event of a final set (third set in women’s or mixed matches, fifth set in men’s matches), where there is no tiebreak, each player’s number of challenges was reset to three if the game score reached 6-6, and again at 12-12. Teymuraz Gabashvili, in his first-round match against Roger Federer, made the first ever Hawk-Eye challenge on Centre Court. Additionally, during the final between Federer and Rafael Nadal, Nadal challenged a shot which was called out. Hawk-Eye showed the ball as in, just clipping the line. The reversal agitated Federer enough for him to request (unsuccessfully) that the umpire turn off the Hawk-Eye technology for the remainder of the match.
    In the 2009 Australian Open fourth-round match between Roger Federer and Tomáš Berdych, Berdych challenged an out call. The Hawk-Eye system was not available when he challenged, likely due to a particularly pronounced shadow on the court. As a result, the original call stood.
    In the 2009 Indian Wells Masters quarterfinal match between Ivan Ljubičić and Andy Murray, Murray challenged an out call. The Hawk-Eye system indicated that the ball landed in the center of the line, despite instant replay images showing that the ball was clearly out. It was later revealed that the Hawk-Eye system had mistakenly picked up the second bounce, which was on the line, instead of the first bounce of the ball. Immediately after the match, Murray apologized to Ljubičić for the call and acknowledged that the point was out.
    The Hawk-Eye system was developed as a replay system, originally for TV broadcast coverage. As such, it initially could not call ins and outs live.
    The Hawk-Eye Innovations website states that the system performs with an average error of 3.6 mm. The standard diameter of a tennis ball is 67 mm, equating to a 5% error relative to ball diameter. This is roughly equivalent to the fluff on the ball.
    Currently, clay-court tournaments are generally free of Hawk-Eye technology (the French Open is the only such Grand Slam), because the mark left on the clay where the ball bounced can serve as evidence in a disputed line call. Chair umpires are then required to get out of their seat and examine the mark on the court, with the player at their side, to discuss the decision.
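Checking the quoted accuracy figures is simple arithmetic: a 3.6 mm average error against a 67 mm ball diameter gives the error ratio cited above.

```python
# Relative error of the tracking system versus the size of a tennis ball,
# using the figures quoted by Hawk-Eye Innovations.
error_mm = 3.6            # average tracking error
ball_diameter_mm = 67     # standard tennis ball diameter
error_ratio = error_mm / ball_diameter_mm  # a bit over 5% of the ball's width
```

The exact ratio is about 5.4%, so the article’s "5% error relative to ball diameter" is a slight rounding down.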

Unification of Rules

Until March 2008, the International Tennis Federation (ITF), Association of Tennis Professionals (ATP), Women’s Tennis Association (WTA), Grand Slam Committee, and several individual tournaments had conflicting rules on how Hawk-Eye was to be utilized. A key example of this was the number of challenges a player was permitted per set, which varied among events. Some tournaments allowed players a greater margin for error, with players allowed an unlimited number of challenges over the course of a match. In other tournaments, players received two or three per set. On 19 March 2008, the aforementioned organizing bodies announced a uniform system of rules: three unsuccessful challenges per set, with an additional challenge if the set reaches a tiebreak. In an advantage set (a set with no tiebreak), players are allowed three unsuccessful challenges every 12 games. The next scheduled event on the men’s and women’s tours, the 2008 Sony Ericsson Open, was the first event to implement these new, standardized rules.
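The unified 2008 challenge-accounting rules described above can be sketched as a small bookkeeping routine. The class and method names are illustrative, not official terminology.

```python
# Sketch of the unified 2008 challenge rules: three unsuccessful challenges
# per set, one extra when the set reaches a tiebreak, and a reset to three
# every 12 games in an advantage set (a set with no tiebreak).

class ChallengeBudget:
    def __init__(self):
        self.remaining = 3  # per-set allowance at the start of each set

    def unsuccessful_challenge(self):
        """Spend one challenge; only incorrect challenges are deducted."""
        if self.remaining <= 0:
            raise RuntimeError("no challenges left this set")
        self.remaining -= 1

    def reach_tiebreak(self):
        """One additional challenge is granted when the set reaches a tiebreak."""
        self.remaining += 1

    def advantage_set_reset(self):
        """In an advantage set, challenges reset to three every 12 games."""
        self.remaining = 3

budget = ChallengeBudget()
budget.unsuccessful_challenge()  # one incorrect challenge spent
budget.reach_tiebreak()          # allowance topped up at the tiebreak
```

Note that successful challenges cost nothing, which is why only unsuccessful ones are deducted here.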

    1. Association football
      Hawk-Eye is one of the goal-line technology (GLT) systems authorized by FIFA. Hawk-Eye tracks the ball and informs the referee if a ball fully crosses the goal line into the goal. The purpose of the system is to eliminate errors in assessing if a goal was scored. The Hawk-Eye system was one of the systems trialed by the sport’s governors prior to the 2012 change to the Laws of the Game that made GLT a permanent part of the game, and it has been used in various competitions since then. GLT is not compulsory and, owing to the cost of Hawk-Eye and its competitors, systems are only deployed in a few high-level competitions.
      As of July 2017, licensed Hawk-Eye systems are installed at 96 stadiums. By number of installations, Hawk-Eye is the most popular GLT system. It is the system used in the Premier League and the Bundesliga, among other leagues.
    2. Snooker
      At the 2007 World Snooker Championship, the BBC used Hawk-Eye for the first time in its television coverage to show player views, particularly of potential snookers. It has also been used to demonstrate the shot a player intended when the actual shot has gone awry. It went on to be used by the BBC at every World Championship, as well as at some other major tournaments, though only sporadically; for instance, at the 2009 Masters at Wembley, Hawk-Eye was used at most once or twice per frame. Its usage has since decreased significantly, and it is now used only at the World Championship and very rarely at any other tournament on the snooker tour. In contrast to tennis, Hawk-Eye is never used in snooker to assist referees' decisions; it is primarily used to show viewers what the player is facing.
    3. Gaelic games
      In Ireland, Hawk-Eye was introduced for all Championship matches at Croke Park in Dublin in 2013. This followed consideration by the Gaelic Athletic Association (GAA) for its use in Gaelic football and hurling. A trial took place in Croke Park on 2 April 2011. The doubleheader featured football between Dublin and Down and hurling between Dublin and Kilkenny. Over the previous two seasons, there had been many calls for the technology to be adopted, especially from Kildare fans, who saw two high-profile decisions go against their team in important games. The GAA said it would review the issue after the 2013 Sam Maguire Cup was presented.
      Hawk-Eye’s use was intended to eliminate contentious scores. It was first used in the Championship on Saturday 1 June 2013 for the Kildare versus Offaly game, part of a doubleheader with the second game of Dublin versus Westmeath. It was used to confirm that Offaly substitute Peter Cunningham’s attempted point had gone wide 10 minutes into the second half.

Use of Hawk-Eye was suspended during the 2013 All-Ireland hurling semi-finals on 18 August due to a human error during an under-18 hurling game between Limerick and Galway. During the minor game, Hawk-Eye ruled an attempted point for Limerick as a miss although the graphic showed the ball passing inside the posts, causing confusion around the stadium; the referee ultimately waved the valid point wide, provoking anger from fans, viewers, and TV analysts covering the game live. The system was subsequently stood down for the senior game which followed, owing to "an inconsistency in the generation of a graphic". Limerick, who were narrowly defeated after extra time, announced they would be appealing over Hawk-Eye's costly failure. Hawk-Eye apologized for the incident and admitted that it was a result of human error. There have been no further incidents in GAA competition. The incident drew attention from the UK, where Hawk-Eye had made its debut in English football's Premier League the day before.
Hawk-Eye was introduced to a second venue, Semple Stadium, Thurles, in 2016. There is no TV screen at Semple: instead, an electronic screen displays a green "Tá" if a score has been made and a red "Níl" if the shot is wide.
It was used at a third venue, Páirc Uí Chaoimh, Cork, in July 2017, for the All-Ireland hurling quarter-finals between Clare and Tipperary and between Wexford and Waterford.

    4. Australian football
      On 4 July 2013, the Australian Football League announced that it would be testing Hawk-Eye technology for use in the score review process. Hawk-Eye was trialled at all matches played at the MCG during Round 15 of the 2013 AFL season, although the AFL made clear that the technology was only being tested and would not be used in any score reviews during the round.
    5. Badminton
      The BWF introduced Hawk-Eye technology in 2014, after testing other instant-review technologies for line-call decisions at BWF major events. Hawk-Eye's tracking cameras are also used to provide shuttlecock speed and other insights during badminton matches. Hawk-Eye was formally introduced at the 2014 India Super Series tournament.

Parasitic Computing

By Author – Samata Shelare

 

Boolean satisfiability is a generalized problem format into which more specific tasks can be mapped, and this opens up the possibility of using the communication protocols provided by Internet hosts as a massive distributed computer. What's more, the computers that participate in the endeavor are unwitting participants: from their perspective, they are merely responding (or not) to TCP traffic. Parasitic computing is especially interesting because the exploit doesn't compromise the security of the affected computers; it piggybacks a math problem onto the TCP checksum work that TCP-enabled hosts carry out under routine operating conditions.
Parasitic Computing Described:
TCP checksums are normally used to ensure data corruption hasn't occurred somewhere along a packet's journey from one computer to another, along what is usually a multi-hop route across the Internet. The transmitting computer adds a two-byte checksum field to the TCP header, which is a function of the routing information and the data payload of the packet. The idea is that if corruption occurs in the transport or physical layers, the receiving computer will detect it, because the presented checksum no longer corresponds with the data received.
Parasitic computing on TCP checksums maps a new problem set onto the TCP checksum function. In the particular instance discussed in BFJB, the technique is to compute a checksum which corresponds to an answer set for a particular boolean satisfiability problem, and then to send out packets whose data payloads are potential solutions to that problem. Receiving computers will attempt to validate the checksum against the data payload, effectively checking the candidate solution they were offered against the problem under consideration. If the checksum validates, properly configured hosts will reply, and the parasitic computer knows that a solution to the problem has been found. The value of this model is that problems for which there is no known efficient algorithm can be solved by brute force through parallelization across many hosts.
Schematic illustration of parasitic computing from BFJB. Our experiment implemented a modified version of the parasitic computing idea: we didn't rely on the HTTP layer as shown in the drawing. We modified the SYN request packet and listened for a SYN-ACK response. Using the handshake step avoids the overhead of establishing the connection beforehand, but may have introduced the false positives we discuss below.
The TCP checksum function breaks the information it checks into 16-bit words and then sums them. Whenever the sum gets larger than 0xffff, the additional column is carried back around (so 0xffff + 0x0001 = 0x0001). The 1's complement of this sum is written down as the checksum. The trick presented in BFJB is to take advantage of the correlation between a numeric sum and a boolean operation: if a and b are bits, and a + b sums to 2, then, treating them as booleans, a ∧ b is TRUE; similarly, if a + b sums to 1, then a ⊕ b is TRUE. BFJB provides the truth table showing the relationship between these two boolean operators and the arithmetic sum. This is the basis of mapping boolean satisfiability into the TCP checksum space.
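The correspondence between bit sums and boolean operators is easy to verify exhaustively:

```python
# For single bits a and b:
#   a + b == 2  exactly when  a AND b is true
#   a + b == 1  exactly when  a XOR b is true
for a in (0, 1):
    for b in (0, 1):
        assert (a + b == 2) == ((a & b) == 1)  # AND maps to a column sum of 2 (binary 10)
        assert (a + b == 1) == ((a ^ b) == 1)  # XOR maps to a column sum of 1 (binary 01)
print("truth table verified")
```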
In more detail, a boolean satisfiability problem asks whether an assignment of truth-values exists which will allow a given formula to evaluate to TRUE. Typically, these formulae are presented in conjunctive normal form (an AND of ORs). However, the problem exemplified in BFJB allows either ⊕ or ∧ to appear in a clause (aside: it's noteworthy that ∨ is derivable from ⊕ and ∧: [a ∨ b] = [a ⊕ b] ⊕ [a ∧ b], but more on this below). The example in BFJB is built from the specific case where there are 16 unique variables and 8 clauses.
The first step, then, in finding a solution to this problem parasitically is to calculate a checksum that corresponds to an assignment which solves the problem. To do this, BFJB takes each of the 8 clauses in some order and generates a bit pattern based on the operator involved: for every ∧, write down 10, and for every ⊕, write down 01. A checksum is then created by taking the 1's complement of this pattern. This value is the "magic" checksum which corresponds to a solution to our problem.

After that, packets with variable assignments as data payloads can be generated and sent to TCP-enabled hosts on the Internet. Each variable is padded with a 0 and then column-aligned with its clause operator according to the ordering used for the checksum creation. The padding is crucial because it is what allows a pair of variables under the ∧ operator to sum to 10 (binary for 2) without overflowing into the column to the left.
For example, under the assignment where x1 = 1, x2 = 1, x3 = 1, x4 = 0, …, x15 = 0, and x16 = 1, we generate two 16-bit words with padding and the operands lined up in columns:
0101…00
0100…01
This packet is then sent to a network host which will evaluate the checksum against the data. Because TCP_CHECKSUM(data) = magic checksum only when the data solves our problem, only those hosts which received a good solution will reply. Each logically possible assignment is generated as a packet payload and sent to a TCP-enabled host for evaluation, and in this way we have the Internet as a parallel distributed computer for finding solutions to our boolean formulae!
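The whole pipeline (magic checksum, padded payloads, and host-side validation) can be sketched in a few lines of Python. The 8-clause operator sequence below is a made-up instance for illustration, not the one used in BFJB:

```python
from itertools import product

def ones_complement_sum16(words):
    """Sum 16-bit words, wrapping any carry back into the low bits."""
    s = 0
    for w in words:
        s += w
        s = (s & 0xFFFF) + (s >> 16)  # carry-around, so 0xffff + 0x0001 -> 0x0001
    return s

# A hypothetical 8-clause problem: clause i joins the variable pair (x_{2i-1}, x_{2i}).
ops = ["AND", "XOR", "AND", "XOR", "AND", "XOR", "AND", "XOR"]

# Magic checksum: 10 for each AND clause, 01 for each XOR, then take the 1's complement.
pattern = 0
for op in ops:
    pattern = (pattern << 2) | (0b10 if op == "AND" else 0b01)
magic_checksum = pattern ^ 0xFFFF

def payload(bits):
    """Pack x1..x16 into two 16-bit words, each variable padded with a leading 0 bit."""
    w1 = w2 = 0
    for i in range(8):
        w1 = (w1 << 2) | bits[2 * i]      # odd-numbered variables
        w2 = (w2 << 2) | bits[2 * i + 1]  # even-numbered variables
    return [w1, w2]

def host_accepts(words, checksum):
    """A receiving host validates when data plus checksum sums to 0xFFFF."""
    return ones_complement_sum16(words + [checksum]) == 0xFFFF

# Brute-force every assignment, as the parasite would by mailing one packet per candidate.
solutions = [bits for bits in product((0, 1), repeat=16)
             if host_accepts(payload(bits), magic_checksum)]
print(len(solutions))  # 16: each AND clause has 1 satisfying pair, each XOR has 2
```

Because each 2-bit column sums without cross-column carry, the data checksums to the magic value exactly when every clause is satisfied under its operator.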
Unfortunately, this relatively high-level description leaves a fair bit of implementation detail to the reader and so we set out to fill in the gaps in order to reproduce the reported results.

HACKABALL – MOST ENGAGING UX FOR DIGITAL EDUCATION

By Author – Shubhangi Agrawal

 

Lifehack (or life-hacking) refers to any trick, shortcut, skill, or novelty method that increases productivity and efficiency, in all walks of life. The term was primarily used by computer experts who suffer from information overload or those with a playful curiosity in the ways they can accelerate their workflow in ways other than programming.

One of the most amazing life hacks is Hackaball.
Hackaball is a computer you can throw that allows children to program their own games.
Computer-related employment is expected to rise by 22% by 2020. This year, England became the first country to make computer programming a compulsory school subject, and in the U.S., organizations are lobbying for programming to be available to students in every school.
It is a device that would encourage even the youngest children to learn necessary skills that will better prepare them for an ever-more tech-focused world.
Hackaball is a smart and responsive smartphone-connected gadget designed to teach 6-10-year-olds the principles of coding through fun physical and mental play. The ball is paired with an iOS application that lets children create their own games and program them onto the device, as well as play the classics. Sensors inside the sturdy Hackaball encourage kids to experiment with sound, light, vibration, and movement. The potential for creating games is as limitless as a child's imagination.
Hackaball quickly caught the attention and captured the imagination of the public.
How does it work?
The computer inside Hackaball has sensors that detect motions like being dropped, bounced, kicked, or shaken, or being perfectly still. Children hack the ball with an iOS or OS X app that allows them to go in and change Hackaball's behavior to do what they want.
The paired iOS or OS X app comes pre-loaded with several games that can be sent to Hackaball to get kids started. Once they've mastered these initial games, kids can create brand new ones using a simple building-block interface, experimenting with Hackaball's sounds, LED lighting effects and rumble patterns. You can install the app on as many iPads and iPhones as you like; it's free.
Hackaball grows the more kids play, rewarding them with unlockable features, challenging them with broken games to fix, and letting them share their creations with friends.
The variety of games children make and play is limited only by their imagination.

Indian Regional Navigation Satellite System

By Author – Samata Shelare

 

India is looking forward to starting its own navigational system. At present, most countries depend on America's Global Positioning System (GPS). India is all set to have its own navigational system, named the Indian Regional Navigation Satellite System (IRNSS). IRNSS is expected to be fully functional by mid-2016. IRNSS is designed to provide accurate position information throughout India, and it also covers a region extending 1,500 km around India's boundary. IRNSS will have 7 satellites, of which 4 are already in orbit. The fully deployed IRNSS system consists of 3 satellites in GEO orbit and 4 satellites in GSO orbit, at approximately 36,000 km altitude above the Earth's surface.
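The quoted altitude of roughly 36,000 km follows directly from Kepler's third law for an orbit with a period of one sidereal day. The constants below are standard values, not figures from this article:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1      # orbital period of a geosynchronous satellite, seconds
EARTH_RADIUS_KM = 6378.0    # equatorial radius

# Kepler's third law: a^3 = mu * T^2 / (4 * pi^2)
semi_major_axis_m = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = semi_major_axis_m / 1000 - EARTH_RADIUS_KM
print(round(altitude_km))  # about 35786 km, i.e. "approximately 36,000 km"
```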

  1. Indian Regional Navigation Satellite System or IRNSS (NavIC) is designed to provide accurate real-time positioning and timing services to users in India as well as the region extending up to 1,500 km from its boundary.
  2. It is an independent regional navigation satellite system developed by India on par with US-based GPS.
  3. NavIC provides two types of services:
    • Standard positioning service – This is meant for all users.
    • Restricted service – Encrypted service which is provided only to authorized users like military and security agencies.
  4. Applications of IRNSS:
    • Terrestrial, aerial and marine navigation
    • Disaster management
    • Vehicle tracking and fleet management
    • Precise timing mapping and geodetic data capture
    • Terrestrial navigation aid for hikers and travelers
    • Visual and voice navigation for drivers
  5. Operational Mechanism
    While the American GPS has 24 satellites in orbit, the number of satellites visible to a ground receiver at any time is limited. In IRNSS, the satellites are always in geosynchronous orbits; hence, each satellite is always visible to a receiver in the region extending 1,500 km around India.
  6. Navigation Constellation
    It consists of seven satellites: three in geostationary earth orbit (GEO) and four in geosynchronous orbit (GSO) inclined at 29 degrees to the equator.
  7. Each satellite carries three rubidium atomic clocks, which provide the precise timing needed for accurate locational data.
  8. The first satellite of the series, IRNSS-1A, was launched on July 1, 2013, and the seventh and last, IRNSS-1G, was launched on April 28, 2016.
  9. Though the desi navigation system is operational, its services are not yet ready for commercial use.

This is because the chipset required for wireless devices like cell phones to access the navigation services is still being developed by ISRO and is yet to hit the market.

The four deployed satellites are IRNSS-1A, IRNSS-1B, IRNSS-1C, and IRNSS-1D. IRNSS-1E is planned for launch by January, and IRNSS-1F and 1G by March 2016.

IRNSS will provide two types of services, namely, Standard Positioning Service (SPS) which is provided to all the users and Restricted Service (RS), which is an encrypted service provided only to the authorized users.
ISRO is recommending a small piece of additional hardware for handheld devices so that they can receive S-band signals from IRNSS satellites, and the inclusion of code in the phone software to receive L-band signals.
A senior ISRO official said that both the L- and S-band signals received from IRNSS's seven-satellite constellation are processed by special embedded software which significantly reduces the errors caused by atmospheric disturbances. This, in turn, gives better location accuracy than the American GPS.
At present, only America's GPS and Russia's GLONASS (GLObal NAvigation Satellite System) are independent and fully functional navigational systems. India will be the third country to have its own navigational system.
The main advantage of India's own navigational system is that India won't be dependent on the US's GPS for defense operations. India had no option till now: during the Kargil war, the Indian Army and Air Force had to use GPS. Information related to security operations is highly confidential and should not be shared with anyone.

Hydrogen: Future Fuel

By Author – Rishabh Sontakke

 

Hydrogen fuel is a zero-emission fuel when burned with oxygen. It can power vehicles and electric devices via electrochemical cells or combustion in internal combustion engines. It is also used for the propulsion of spacecraft and might potentially be mass-produced and commercialized for passenger vehicles and aircraft.
Hydrogen lies in the first group and first period in the periodic table, i.e. it is the first element on the periodic table, making it the lightest element. Since hydrogen gas is so light, it rises in the atmosphere and is therefore rarely found in its pure form, H2. In a flame of pure hydrogen gas, burning in air, the hydrogen (H2) reacts with oxygen (O2) to form water (H2O) and releases energy.
2H2(g) + O2(g) → 2H2O(g) + energy
If carried out in atmospheric air instead of pure oxygen, as is usually the case, hydrogen combustion may yield small amounts of nitrogen oxides along with the water vapor.
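The mass balance of the combustion reaction above can be checked with standard molar masses (the molar-mass values are textbook constants, not figures from this article):

```python
# 2 H2 + O2 -> 2 H2O
M_H2, M_O2, M_H2O = 2.016, 31.998, 18.015  # molar masses, g/mol

h2_g = 2 * M_H2    # 4.032 g of hydrogen ...
o2_g = 1 * M_O2    # ... burns with 31.998 g of oxygen ...
h2o_g = 2 * M_H2O  # ... to give 36.030 g of water

assert abs(h2_g + o2_g - h2o_g) < 0.01  # mass is conserved
water_per_kg_h2 = h2o_g / h2_g
print(round(water_per_kg_h2, 2))  # about 8.94 kg of water per kg of hydrogen burned
```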
The energy released enables hydrogen to act as a fuel. In an electrochemical cell, that energy can be used with relatively high efficiency. If it is simply used for heat, the usual thermodynamics limits the thermal efficiency.
Since there is very little free hydrogen gas, hydrogen is, in practice, only an energy carrier, like electricity, not an energy resource. Hydrogen gas must be produced, and that production always requires more energy than can be retrieved from the gas as a fuel later on. This is a limitation imposed by the physical law of conservation of energy. Most hydrogen production also has environmental impacts.

Hydrogen Production:

Because pure hydrogen does not occur naturally on Earth in large quantities, it takes a substantial amount of energy to produce it industrially. There are different ways to produce it, such as electrolysis and the steam-methane reforming process.
Electrolysis and the steam-methane reforming process:
In electrolysis, electricity is run through water to separate the hydrogen and oxygen atoms. This method can use wind, solar, geothermal, hydro, fossil fuels, biomass, nuclear, and many other energy sources. Obtaining hydrogen from this process is being studied as a viable way to produce it domestically at a low cost. Steam-methane reforming, the current leading technology for producing hydrogen in large quantities, extracts the hydrogen from methane. However, this reaction causes a side production of carbon dioxide and carbon monoxide, which are greenhouse gases and contribute to global warming.

Energy:

Hydrogen is locked up in enormous quantities in water, hydrocarbons, and other organic matter. One of the challenges of using hydrogen as a fuel is efficiently extracting it from these compounds. Currently, steam reforming, which combines high-temperature steam with natural gas, accounts for the majority of the hydrogen produced. Hydrogen can also be produced from water through electrolysis, but this method is much more energy-demanding.

Once extracted, hydrogen is an energy carrier (i.e., a store for energy first generated by other means). The energy can be delivered to fuel cells to generate electricity and heat, or burned to run a combustion engine. In each case, hydrogen is combined with oxygen to form water. The heat in a hydrogen flame is a radiant emission from the newly formed water molecules: the molecules are in an excited state upon initial formation and then transition to the ground state, releasing thermal radiation. When burning in air, the temperature is roughly 2,000 °C.

Historically, carbon has been the most practical carrier of energy, as more energy is packed into fossil fuels than into pure liquid hydrogen of the same volume. Carbon atoms have excellent storage properties and release even more energy when burned together with hydrogen. However, burning carbon-based fuel and releasing its exhaust contributes to global warming due to the greenhouse effect of carbon gases. Hydrogen is the smallest element, and some of it will inevitably escape from any known container or pipe in micro amounts, yet simple ventilation can prevent such leakage from ever reaching the volatile 4% hydrogen-air mixture. As long as the product is in a gaseous or liquid state, pipes are a classic and very efficient form of transportation. Pure hydrogen, though, causes metal to become brittle, suggesting metal pipes may not be ideal for hydrogen transport.

Uses:

Hydrogen fuel can provide motive power for liquid-propellant rockets, cars, boats and airplanes, and for portable or stationary fuel cell applications, which can power an electric motor. The problems with using hydrogen fuel in cars arise from the fact that hydrogen is difficult to store in either a high-pressure tank or a cryogenic tank.
An alternative fuel must be technically feasible, economically viable, easily converted to another energy form when combusted, safe to use, and potentially harmless to the environment. Hydrogen is the most abundant element on earth. Although hydrogen does not exist freely in nature, it can be produced from a variety of sources, such as steam reformation of natural gas, gasification of coal, and electrolysis of water. Hydrogen gas can be used in traditional gasoline-powered internal combustion engines (ICEs) with minimal conversion. However, vehicles with polymer electrolyte membrane (PEM) fuel cells provide greater efficiency. Hydrogen gas combusts with oxygen to produce water vapor, and even the production of hydrogen gas can be emissions-free with the use of renewable energy sources. The current price of hydrogen is about $4 per kg, which is about the equivalent of a gallon of gasoline.

However, in fuel cell vehicles, such as the 2009 Honda FCX Clarity, 1 kg provides about 68 miles of travel. Of course, the price of such vehicles is currently very high, and ongoing research and implementation of a hydrogen economy are required to make this fuel economically feasible. The current focus is on hydrogen as a clean alternative fuel that produces insignificant greenhouse gas emissions. Yet if hydrogen becomes the next transportation fuel, the primary energy source used to produce the vast amounts of hydrogen will not necessarily be a renewable, clean source. Carbon sequestration is referenced frequently as a means to eliminate CO2 emissions from the burning of coal, with the gases captured and sequestered in gas wells or depleted oil wells. However, the availability of these sites is not widespread, and the presence of CO2 may acidify groundwater. Storage and transport are also major issues due to hydrogen's low density.
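The $4/kg price and the 68 miles/kg figure above imply a running cost per mile. The gasoline comparison below assumes a 30 mpg car at $4 per gallon, which is our assumption, not a figure from the article:

```python
# Figures from the text: hydrogen at $4/kg; the FCX Clarity travels about 68 miles/kg.
h2_price_per_kg = 4.00
miles_per_kg = 68
h2_cost_per_mile = h2_price_per_kg / miles_per_kg

# Comparison point (assumed): a 30 mpg gasoline car with fuel at $4/gallon.
gas_cost_per_mile = 4.00 / 30

print(f"hydrogen: ${h2_cost_per_mile:.3f}/mile, gasoline: ${gas_cost_per_mile:.3f}/mile")
```

Under these assumptions, hydrogen works out to roughly six cents per mile, less than half the assumed gasoline cost; the catch, as the text notes, is the vehicle price rather than the fuel price.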

Is the investment in new infrastructure too costly?

Can our old infrastructure currently used for natural gas transport be retrofitted for hydrogen?

The burning of coal and nuclear fission are the main energy sources that will be used to provide an abundant supply of hydrogen fuel.

How does this process help our current global warming predicament? The U.S. Department of Energy has recently funded a research project to produce hydrogen from coal at large-scale facilities, with carbon sequestration in mind.

Is this the wrong approach? Should there be more focus on other forms of energy that produce no greenhouse gas emissions? If the damage to the environment is interpreted as a monetary cost, the promotion of energy sources such as wind and solar may prove to be a more economical approach.

The possibility of a hydrogen economy that incorporates the use of hydrogen into every aspect of transportation requires much further research and development. The most economical and major source of hydrogen in the US is the steam reformation of natural gas, a nonrenewable resource and a producer of greenhouse gases. The electrolysis of water is a potentially sustainable method of producing hydrogen, but only if renewable energy sources are used for the electricity. Today, less than 5% of our electricity comes from renewable sources such as solar, wind, and hydro. Nuclear power may be considered a renewable resource by some, but the waste generated by this energy source becomes a major problem. A rapid shift toward renewable energy sources is required before this proposed hydrogen economy can prove itself.

Solar photovoltaic (PV) systems are the current focus of my research into the energy source required for the electrolysis of water. One project conducted at the GM Proving Ground in Milford, MI employed 40 solar PV modules directly connected to an electrolyzer/storage/dispenser system. The result was an 8.5% efficiency in the production of hydrogen, with an average production of 0.5 kg of high-pressure hydrogen per day. Research similar to this may result in the optimization of the solar hydrogen energy system.

Furthermore, the infrastructure for a hydrogen economy will come with high capital costs. The transport of hydrogen through underground pipes seems the most economical option once demand grows enough to require a large centralized facility. However, in places of low population density, this method may not be economically feasible. The project mentioned earlier may become an option for individuals to produce their own hydrogen gas at home, with solar panels lining their roofs. A drastic change is needed to slow down the effects of our fossil-fuel-dependent society.
Conservation can indeed help, but the lifestyles we are accustomed to carry certain energy demands.

Li-Fi Technology

By Author – Rashmita Soge

 

Li-Fi is a technology for wireless communication between devices that uses light to transmit data. In its present state, only LED lamps can be used for the transmission of visible light. Li-Fi is designed to use LED light bulbs similar to those currently in use in many energy-conscious homes and offices; however, Li-Fi bulbs are outfitted with a chip that modulates the light imperceptibly for optical data transmission. Li-Fi data is transmitted by the LED bulbs and received by photoreceptors. Li-Fi's early developmental models were capable of 150 megabits per second (Mbps), and some commercial kits enabling that speed have been released. In the lab, with stronger LEDs and different technology, researchers have achieved 10 gigabits per second (Gbps), which is faster than 802.11ad. The term was first introduced by Harald Haas during a 2011 TED Global talk in Edinburgh. In technical terms, Li-Fi is a visible light communications system capable of transmitting data at high speeds over the visible light spectrum, ultraviolet, and infrared radiation. In terms of its end use, the technology is similar to Wi-Fi; the key technical difference is that Wi-Fi uses radio frequency to transmit data. Using light to transmit data allows Li-Fi to offer several advantages, such as working across a higher bandwidth, working in areas susceptible to electromagnetic interference (e.g. aircraft cabins, hospitals) and offering higher transmission speeds. The technology is actively being developed by several organizations across the globe.

Benefits of LiFi:-

  • Higher speeds than Wi-Fi.
  • 10,000 times the frequency spectrum of radio.
  • More secure because data cannot be intercepted without a clear line of sight.
  • Prevents piggybacking.
  • Eliminates neighboring network interference.
  • Unimpeded by radio interference.
  • Does not create interference in sensitive electronics, making it better for use in environments like hospitals and aircraft.

By using Li-Fi in all the lights in and around a building, the technology could enable a greater area of coverage than a single Wi-Fi router. Drawbacks of the technology include the need for a clear line of sight, difficulties with mobility, and the requirement that lights stay on for operation.

All existing wireless technologies utilize different frequencies of the electromagnetic spectrum. While Wi-Fi uses radio waves, Li-Fi carries information through visible light communication. The latter therefore requires a photo-detector to receive light signals and a processor to convert the data into streamable content. The semiconductor nature of LED light bulbs makes them a feasible source of high-speed wireless communication.

So, how does it work? Let's look at the working of Li-Fi:-

When a constant current is applied to an LED bulb, it emits a constant stream of photons, observed as visible light. When this current is varied slowly, the bulb dims up and down. Because LED bulbs are semiconductor devices, the current, and hence the optical output, can be modulated at extremely high speeds, which can be detected by a photo-detector device and converted back to electrical current.

The intensity modulation is too quick to be perceived by the human eye, so the communication appears seamless, just like RF. The technique can thus transmit high-speed information from an LED light bulb. However, it is much simpler than RF communication, which requires radio circuits, antennas, and complex receivers.
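The modulate-and-detect loop described above can be mimicked with a toy on-off-keying simulation. This is purely illustrative; real Li-Fi systems use far more sophisticated modulation schemes:

```python
def encode_ook(data: bytes, samples_per_bit: int = 4):
    """Map each bit to LED intensity samples: 1 -> bright, 0 -> dim (but still lit)."""
    signal = []
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            signal.extend([1.0 if bit else 0.2] * samples_per_bit)
    return signal

def decode_ook(signal, samples_per_bit: int = 4, threshold: float = 0.6):
    """Photo-detector side: average each bit period and compare to a threshold."""
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        chunk = signal[i:i + samples_per_bit]
        bits.append(1 if sum(chunk) / len(chunk) > threshold else 0)
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

# Round-trip: bulb modulates, photo-detector demodulates.
assert decode_ook(encode_ook(b"LiFi")) == b"LiFi"
print("round-trip ok")
```

Note that the "dim" level stays above zero, mirroring the fact that a Li-Fi bulb appears continuously lit while transmitting.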

Li-Fi uses direct modulation methods similar to those used in low-cost infrared communication devices such as remote control units. Moreover, infrared communication is limited in power by safety requirements, while LED bulbs have intensities high enough to achieve very large data rates.

 

Wi-Fi vs Li-Fi:-

Now that we know what Li-Fi is and how it works, the question is where it stands when compared to Wi-Fi. In order to understand which one is superior, let's take a look at certain aspects of both technologies:-

  • Speed:- Li-Fi can possibly deliver data transfer speeds of 224 gigabits per second, which clearly leaves Wi-Fi far behind. In tests conducted by pureLiFi, the technology produced over 100 Gbps in a controlled environment. Moreover, the visible light spectrum is 1,000 times larger than the 300 GHz of RF spectrum, which helps in achieving high speeds.
  • Energy Efficiency:- Usually, Wi-Fi needs two radios to communicate back and forth which takes a lot of energy to discern the signal from the noise as there may be several devices using the same frequency. Each device has an RF transmitter and baseband chip for enabling communication. However, as Li-Fi uses LED lights, the transmission requires minimal additional power for enabling communication.
  • Security:- One of the main differences between Wi-Fi and Li-Fi is that the former has a wider range (typically 32 meters) and can even be accessed throughout different portions of a building; the latter, however, can't penetrate walls and ceilings, and hence is more secure. Although that would mean fitting a separate LED bulb in every room, the technology can be ideal for sensitive operations in R&D, defense, banking, etc. So, in a way, it's not subject to remote piracy and hacking, as Wi-Fi is.
  • Data Density:- Owing to interference issues, Wi-Fi works in a less dense environment, while Li-Fi works in a highly dense environment. The area covered by one Wi-Fi access point may contain tens or hundreds of lights, and each Li-Fi light can deliver the same speed as a Wi-Fi access point or greater. Therefore, in the same area, Li-Fi can provide 10, 100, or even 1,000 times greater wireless capacity.

Future Scope:-

Li-Fi provides a great platform to explore the transmission of wireless data at high rates. If this technology is put into practical use, every installed light bulb could be used as a hotspot to transmit data in a cleaner, greener and safer manner. The applications of Li-Fi are beyond imagination at the moment. With this enhanced technology, people will be able to access wireless data on the go at very high rates from the installed LEDs. It resolves the problem of the shortage of radio-frequency bandwidth. In various military applications where RF-based communications are not allowed, Li-Fi could be a viable alternative for securely passing data at high rates to other military vehicles. LEDs can also be used effectively to carry out VLC in many hospital applications where RF-based communications could be potentially dangerous. Since light cannot penetrate walls, this is a limitation of the technology. Nevertheless, given its high rates of data transmission and applications in multiple fields, Li-Fi is definitely the future of wireless communication.
