Business

Generic Visual Perception Processor

By Author – Ashish Kasture

 

The Generic Visual Perception Processor (GVPP) is a single chip modeled on the perception capabilities of the human brain, which can detect objects in a motion video signal and then locate and track them in real time. Imitating the neural networks of the human eye and brain, the chip can handle about 20 billion instructions per second. This electronic eye on a chip can handle tasks ranging from sensing variable parameters, arriving in the form of video signals, to processing them for recognition.
Generic Visual Perception Processor can automatically detect objects and track their movement in real-time. The GVPP, which crunches 20 billion instructions per second (BIPS), models the human perceptual process at the hardware level by mimicking the separate temporal and spatial functions of the eye-to-brain system. The processor sees its environment as a stream of histograms regarding the location and velocity of objects. GVPP has been demonstrated as capable of learning-in-place to solve a variety of pattern recognition problems. It boasts automatic normalization for varying object size, orientation and lighting conditions, and can function in daylight or darkness. This electronic “eye” on a chip can now handle most tasks that a normal human eye can.
That includes driving safely, selecting ripe fruits, reading, and recognizing things. Sadly, though modeled on the visual perception capabilities of the human brain, the chip is not really a medical marvel poised to cure the blind. The GVPP tracks an "object," defined as a certain set of hue, luminance, and saturation values in a specific shape, from frame to frame in a video stream by anticipating where its leading and trailing edges make "differences" with the background. That means it can track an object through varying light sources or changes in size, as when an object gets closer to the viewer or moves farther away. The GVPP's major performance strength over current-day vision systems is its adaptation to varying light conditions. Today's vision systems dictate uniform, shadowless illumination, and even next-generation prototype systems, designed to work under normal lighting conditions, can be used only from dawn to dusk.
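The frame-to-frame, histogram-driven tracking described above can be approximated in ordinary software. Below is a minimal sketch using OpenCV's histogram back-projection and mean-shift tracking; it is a rough analogue for illustration, not the GVPP's actual hardware algorithm, and the video file name and initial object window are placeholder values.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("input.mp4")        # placeholder video source
    ok, frame = cap.read()
    x, y, w, h = 300, 200, 80, 80              # placeholder initial object window

    # Build a hue histogram of the object, ignoring dim or unsaturated pixels,
    # loosely analogous to defining an "object" by hue/saturation/luminance.
    roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
    hist = cv2.calcHist([roi], [0], mask, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = (x, y, w, h)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(backproj, window, term)  # follow the histogram peak
        print("object window:", window)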

The GVPP, on the other hand, adapts to real-time changes in lighting without recalibration, day or night. For many decades the field of computing has been trapped by the limitations of traditional processors, and many futuristic technologies have been bound by them. These limitations stem from the basic architecture of these processors: they work by slicing each complex program into simple tasks that a processor can execute, which requires that an algorithm exist for the solution of the particular problem. But there are many situations where no algorithm exists, or where no human is able to formulate one.
Even in these extreme cases, the GVPP performs well: it can solve such problems with its neural learning function. Neural networks are extremely fault tolerant. By design, even if a group of neurons fails, the network suffers only a smooth degradation in performance; it will not abruptly stop working. This is a crucial difference from traditional processors, which fail outright if even a few components are damaged. The GVPP recognizes, stores, matches, and processes patterns; even if a pattern in the input is not recognizable to a human programmer, the neural network will dig it out. This makes the GVPP an efficient tool for applications like pattern matching and recognition.

What is Security Trend?

By Author -Rashmita Soge

 

Looking ahead, a number of emerging IT security advances will arm organizations with the right information at the right time to help spot and mitigate potential breaches before they occur. The aim here, in no particular order, is to collaborate, contribute, consume, and create knowledge about today's top security trends, and to help identify security issues that are relevant and emerging, as well as issues that need more guidance.
Overall, security trends closely follow the technical trends of a particular year, and AI (artificial intelligence), IoT (Internet of Things), and data privacy are said to be the game changers in the technical industry for 2018.
Here are the key technology trends for 2018 that are anticipated to have an even greater impact on businesses and government in the relentless pursuit to increase efficiencies and enhance connectivity.
Internet of Things (IoT):
Internet-enabled devices are expected to continue their momentum in 2018 and connect even more everyday objects. IoT devices present some very serious security issues. Most IoT devices were never built to withstand even very simple cyber-attacks. Most IoT devices are unable to be patched or upgraded, and will, therefore, remain vulnerable to cyber hacks or breaches.
If compromised, IoT devices with security vulnerabilities can result in a range of issues, such as a simple denial of service, privacy compromises, major infrastructure failures, or even death. There are well-known methods of improving the security of IoT devices, such as implementing additional protection steps and processes, but these have other drawbacks, such as higher costs, and user inconvenience. Government regulation is needed to set national frameworks in place to ensure devices have minimum standards of protection.
Artificial Intelligence (AI):
Many vendors have jumped onto the AI bandwagon in order to build "smarter" systems that can detect and act on security threats, either before or very early after information has been compromised. I expect to see this continue in 2018, with computers becoming more intuitive and defensive. This presents a strong opportunity for developers, with a growing demand for systems to be increasingly intelligent and alert to potential cyber risks.
Just as AI has the potential to boost productivity for businesses and government, hackers will look to AI to find vulnerabilities in software, with machine-like efficiency, to hack into systems in a fraction of the time it would take a human being. This and other scenarios make AI one of the key technology trends to watch and mitigate risk for in 2018.
Cryptocurrency:
Cryptocurrency such as Bitcoin came onto the global agenda in 2017, and with it surging in value, it begs the question: how can owners ensure it is protected and legitimate?
Current cryptocurrency systems have significant issues with scale and performance and may be susceptible to quantum computer attacks in the future. Cryptocurrency systems need to evolve to overcome these issues, and investors in cryptocurrency need to place more emphasis on the security strategies and systems in place at their providers or risk losing their capital overnight.
Cloud Computing:
Many high-profile breaches in recent years have demonstrated the vulnerabilities of cloud computing and how it continues to be a significant issue. The ongoing question of how businesses and individuals can manage their data remotely while ensuring it is protected will continue to resonate in 2018.
While many users will judge that the risk of data compromise does not warrant local control of information when compared with the benefits of convenience and low price of cloud services, a growing number of potential users will come to realize that for them, the risk of data compromise may be too high. This will be particularly true for government agencies, defense, intelligence, banking and finance, and legal services.
Data Privacy:
Data privacy, also called information privacy, is the aspect of information technology (IT) that deals with the ability of an organization or individual to determine what data in a computer system can be shared with third parties. Data privacy continues to be a losing battle, with every new device monitoring our conversations, location, likes, and dislikes. A huge virtual dossier is being built on us from the digital footprint we constantly leave behind. This will continue into 2018 and beyond.

What is Hawk-Eye?

By Author – Rishabh Sontakke

 

Hawk-Eye is a computer system used in numerous sports such as cricket, tennis, Gaelic football, badminton, hurling, Rugby Union, association football, and volleyball, to visually track the trajectory of the ball and display a profile of its statistically most likely path as a moving image.
The Sony-owned Hawk-Eye system was developed in the United Kingdom by Paul Hawkins. The system was originally implemented in 2001 for television purposes in cricket. The system works via six (sometimes seven) high-performance cameras, normally positioned on the underside of the stadium roof, which track the ball from different angles. The video from the six cameras is then triangulated and combined to create a three-dimensional representation of the ball’s trajectory. Hawk-Eye is not infallible, but is accurate to within 3.6 millimeters and generally trusted as an impartial second opinion in sports.
It has been accepted by governing bodies in tennis, cricket, and association football as a means of adjudication. Hawk-Eye has been used for the Challenge System in tennis since 2006 and for the Umpire Decision Review System in cricket since 2009. The system was rolled out for the 2013-14 Premier League season as a means of goal-line technology. In December 2014, the clubs of the first division of the Bundesliga decided to adopt the system for the 2015-16 season.

How does it work?

The whole setup involves six high-speed vision-processing cameras along with two broadcast cameras. When a delivery is bowled, the position of the ball recorded by each camera is combined to form a virtual 3D position of the ball after it is delivered. The whole process of the delivery is broken into two parts: delivery to bounce, and bounce to impact. Multiple frames of the ball's position are measured, and from these the direction, speed, swing, and dip of that specific delivery can be calculated.
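Hawk-Eye's exact algorithms are proprietary, but the core triangulation step described above can be sketched as a least-squares intersection of viewing rays from several calibrated cameras. The following Python illustration uses made-up camera positions and ray directions:

    import numpy as np

    def triangulate(origins, directions):
        # Find the 3D point minimizing the summed squared distance to all rays.
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            P = np.eye(3) - np.outer(d, d)   # projector orthogonal to this ray
            A += P
            b += P @ o
        return np.linalg.solve(A, b)         # rays must not all be parallel

    # Two toy cameras looking at roughly the same point:
    origins = np.array([[0.0, 0.0, 5.0], [10.0, 0.0, 5.0]])
    dirs = np.array([[0.6, 0.8, 0.0], [-0.6, 0.8, 0.0]])
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    print(triangulate(origins, dirs))        # estimated ball position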

Deployment in sports

  1. Cricket:
    The technology was first used by Channel 4 during a Test match between England and Pakistan at Lord's Cricket Ground on 21 May 2001. It is used primarily by the majority of television networks to track the trajectory of balls in flight. In the winter season of 2008/2009, the ICC trialed a referral system in which Hawk-Eye was used for referring decisions to the third umpire if a team disagreed with an LBW decision. The third umpire was able to look at what the ball actually did up to the point when it hit the batsman, but could not look at the predicted flight of the ball after it hit the batsman.
    Its major use in cricket broadcasting is in analyzing leg-before-wicket decisions, where the likely path of the ball can be projected forward, through the batsman's legs, to see if it would have hit the stumps. Consultation with the third umpire, for conventional slow motion or Hawk-Eye, on leg-before-wicket decisions is currently sanctioned in international cricket, even though doubts remain about its accuracy.
    The Hawk-Eye referral for an LBW decision is based on three criteria:
    • Where the ball pitched.
    • The location of impact with the leg of the batsman.
    • The projected path of the ball past the batsman.
    In all three cases, marginal calls result in the on-field call being maintained.
    Due to its real-time coverage of bowling speed, the system is also used to show patterns in a bowler's delivery, such as line and length, or swing/turn information. At the end of an over, all six deliveries are often shown simultaneously to show a bowler's variations, such as slower deliveries, bouncers, and leg-cutters. A complete record of a bowler can also be shown over the course of a match.
    Batsmen also benefit from the analysis of Hawk-Eye, as a record can be brought up of the deliveries from which a batsman scored. These are often shown as a 2-D silhouetted figure of batsmen and color-coded dots of the balls faced by the batsman. Information such as the exact spot where the ball pitches or speed of the ball from the bowler’s hand (to gauge batsman reaction time) can also help in post-match analysis.
  2. Tennis:
    In late 2005 Hawk-Eye was tested by the International Tennis Federation (ITF) in New York City and was passed for professional use. Hawk-Eye reported that the New York tests involved 80 shots being measured by the ITF’s high-speed camera, a device similar to MacCAM. During an early test of the system at an exhibition tennis tournament in Australia (seen on local TV), there was an instance when the tennis ball was shown as “Out”, but the accompanying word was “In”. This was explained to be an error in the way the tennis ball was shown on the graphical display as a circle, rather than as an ellipse. This was immediately corrected.
    Hawk-Eye has been used in television coverage of several major tennis tournaments, including Wimbledon, the Queen’s Club Championships, the Australian Open, the Davis Cup and the Tennis Masters Cup. The US Open Tennis Championship announced they would make official use of the technology for the 2006 US Open where each player receives two challenges per set. It is also used as part of a larger tennis simulation implemented by IBM called PointTracker.
    The 2006 Hopman Cup in Perth, Western Australia, was the first elite-level tennis tournament where players were allowed to challenge point-ending line calls, which were then reviewed by the referees using Hawk-Eye technology. It used 10 cameras feeding information about ball position to the computers. Jamea Jackson was the first player to challenge a call using the system.
    In March 2006, at the Nasdaq-100 Open in Miami, Hawk-Eye was used officially for the first time at a tennis tour event. Later that year, the US Open became the first grand-slam event to use the system during play, allowing players to challenge line calls.
    The 2007 Australian Open was the first grand-slam tournament of 2007 to implement Hawk-Eye for challenges to line calls: each tennis player in Rod Laver Arena was allowed two incorrect challenges per set and one additional challenge should a tiebreaker be played. In the event of an advantage final set, challenges were reset to two for each player every 12 games, i.e. at 6-all and 12-all. Controversies followed the event, as at times Hawk-Eye produced erroneous output. In 2008, tennis players were allowed three incorrect challenges per set instead; any leftover challenges did not carry over to the next set. Once, Amélie Mauresmo challenged a ball that was called in, and Hawk-Eye showed the ball was out by less than a millimeter, but the call was allowed to stand. As a result, the point was replayed and Mauresmo did not lose an incorrect challenge.
    The Hawk-Eye technology used at the 2007 Dubai Tennis Championships attracted some minor controversy. Defending champion Rafael Nadal accused the system of incorrectly declaring an out ball to be in following his exit. The umpire had called a ball out; when Mikhail Youzhny challenged the decision, Hawk-Eye said it was in by 3 mm. Youzhny said afterwards that he himself thought the mark may have been wide, but then offered that this kind of technology error could easily have been made by linesmen and umpires. Nadal could only shrug, saying that had this system been used on clay, the mark would have clearly shown Hawk-Eye to be wrong. (The mark left by the ball on a hard court covers only a portion of the total area that the ball was in contact with, since a certain amount of pressure is required to create the mark.)
    The 2007 Wimbledon Championships also implemented the Hawk-Eye system as an officiating aid on Centre Court and Court 1, and each tennis player was allowed three incorrect challenges per set. If the set produced a tiebreaker, each player was given an additional challenge. Additionally, in the event of a final set (third set in women's or mixed matches, fifth set in men's matches), where there is no tiebreak, each player's number of challenges was reset to three if the game score reached 6-6, and again at 12-12. Teymuraz Gabashvili, in his first-round match against Roger Federer, made the first ever Hawk-Eye challenge on Centre Court. Additionally, during the final between Federer and Rafael Nadal, Nadal challenged a shot which was called out. Hawk-Eye showed the ball as in, just clipping the line. The reversal agitated Federer enough for him to request (unsuccessfully) that the umpire turn off the Hawk-Eye technology for the remainder of the match.
    In the 2009 Australian Open fourth-round match between Roger Federer and Tomáš Berdych, Berdych challenged an out call. The Hawk-Eye system was not available when he challenged, likely due to a particularly pronounced shadow on the court. As a result, the original call stood.
    In the 2009 Indian Wells Masters quarterfinal match between Ivan Ljubičić and Andy Murray, Murray challenged an out call. The Hawk-Eye system indicated that the ball had landed in the center of the line, despite instant-replay images showing that the ball was clearly out. It was later revealed that the Hawk-Eye system had mistakenly picked up the second bounce, which was on the line, instead of the first bounce of the ball. Immediately after the match, Murray apologized to Ljubičić for the call and acknowledged that the point was out.
    The Hawk-Eye system was developed as a replay system, originally for TV broadcast coverage. As such, it initially could not call ins and outs live.
    The Hawk-Eye Innovations website states that the system performs with an average error of 3.6 mm. The standard diameter of a tennis ball is 67 mm, equating to a 5% error relative to ball diameter. This is roughly equivalent to the fluff on the ball.
    Currently, clay-court tournaments, notably the French Open (the only Grand Slam played on clay), are generally free of Hawk-Eye technology, because the marks left on the clay where the ball bounced serve as evidence for a disputed line call. Chair umpires are instead required to get out of their seat and examine the mark on the court, with the player at their side, to discuss the decision.

Unification of Rules

Until March 2008, the International Tennis Federation (ITF), Association of Tennis Professionals (ATP), Women's Tennis Association (WTA), Grand Slam Committee, and several individual tournaments had conflicting rules on how Hawk-Eye was to be utilized. A key example of this was the number of challenges a player was permitted per set, which varied among events. Some tournaments allowed players a greater margin for error, with an unlimited number of challenges over the course of a match; at other tournaments, players received two or three per set. On 19 March 2008, the aforementioned organizing bodies announced a uniform system of rules: three unsuccessful challenges per set, with an additional challenge if the set reaches a tiebreak. In an advantage set (a set with no tiebreak), players are allowed three unsuccessful challenges every 12 games. The next scheduled event on the men's and women's tours, the 2008 Sony Ericsson Open, was the first event to implement these new, standardized rules.

  3. Association football
      Hawk-Eye is one of the goal-line technology (GLT) systems authorized by FIFA. Hawk-Eye tracks the ball and informs the referee if a ball fully crosses the goal line into the goal. The purpose of the system is to eliminate errors in assessing if a goal was scored. The Hawk-Eye system was one of the systems trialed by the sport’s governors prior to the 2012 change to the Laws of the Game that made GLT a permanent part of the game, and it has been used in various competitions since then. GLT is not compulsory and, owing to the cost of Hawk-Eye and its competitors, systems are only deployed in a few high-level competitions.
      As of July 2017, licensed Hawk-Eye systems are installed at 96 stadiums. By number of installations, Hawk-Eye is the most popular GLT system. Hawk-Eye is the system used in the Premier League and the Bundesliga, among other leagues.
  4. Snooker
      At the 2007 World Snooker Championship, the BBC used Hawk-Eye for the first time in its television coverage to show player views, particularly of potential snookers. It has also been used to demonstrate the shots players intended when the actual shot has gone awry. It is now used by the BBC at every World Championship, as well as at some other major tournaments. The BBC used to deploy the system sporadically; for instance, at the 2009 Masters at Wembley, Hawk-Eye was used at most once or twice per frame. Its usage has decreased significantly, and it is now used only at the World Championship and very rarely at any other tournament on the snooker tour. In contrast to tennis, Hawk-Eye is never used in snooker to assist referees' decisions; it is primarily used to show viewers what the player is facing.
  5. Gaelic games
      In Ireland, Hawk-Eye was introduced for all Championship matches at Croke Park in Dublin in 2013. This followed consideration by the Gaelic Athletic Association (GAA) for its use in Gaelic football and hurling. A trial took place in Croke Park on 2 April 2011. The doubleheader featured football between Dublin and Down and hurling between Dublin and Kilkenny. Over the previous two seasons, there had been many calls for the technology to be adopted, especially from Kildare fans, who saw two high-profile decisions go against their team in important games. The GAA said it would review the issue after the 2013 Sam Maguire Cup was presented.
      Hawk-Eye’s use was intended to eliminate contentious scores. It was first used in the Championship on Saturday 1 June 2013 for the Kildare versus Offaly game, part of a doubleheader with the second game of Dublin versus Westmeath. It was used to confirm that Offaly substitute Peter Cunningham’s attempted point had gone wide 10 minutes into the second half.

Use of Hawk-Eye was suspended during the 2013 All-Ireland hurling semi-finals on 18 August due to a human error during an Under-18 hurling game between Limerick and Galway. During the minor game, Hawk-Eye ruled a point for Limerick as a miss even though the graphic showed the ball passing inside the posts, causing confusion around the stadium; the referee ultimately waved the valid point wide, provoking anger from fans, viewers, and TV analysts covering the game live. The system was subsequently stood down for the senior game which followed, owing to "an inconsistency in the generation of a graphic". Limerick, who were narrowly defeated after extra time, announced they would be appealing over Hawk-Eye's costly failure. Hawk-Eye apologized for the incident and admitted that it was a result of human error. There have been no further incidents in GAA games. The incident drew attention from the UK, where Hawk-Eye had made its debut in English football's Premier League the day before.
Hawk-Eye was introduced to a second venue, Semple Stadium, Thurles, in 2016. There is no TV screen at Semple; instead, an electronic screen displays a green "Tá" if a score has been made, and a red "Níl" if the shot is wide.
It was used at a third venue, Páirc Uí Chaoimh, Cork, in July 2017, for the All-Ireland hurling quarter-finals, Clare versus Tipperary and Wexford versus Waterford.

  6. Australian football
    On 4 July 2013, the Australian Football League announced that they would be testing Hawk-Eye technology to be used in the Score Review process. Hawk-Eye was used for all matches played at the MCG during Round 15 of the 2013 AFL Season. The AFL also announced that Hawk-Eye was only being tested, and would not be used in any Score Reviews during the round.
  7. Badminton
    The BWF (Badminton World Federation) introduced Hawk-Eye technology in 2014 after testing other instant-review technologies for line-call decisions at BWF major events. Hawk-Eye's tracking cameras are also used to provide shuttlecock speed and other insights during badminton matches. Hawk-Eye was formally introduced at the 2014 India Super Series tournament.

Parasitic Computing

By Author – Samata Shelare

 

Parasitic computing rests on a generalized problem format into which more specific tasks can be mapped; this opens up the possibility of using the communication protocols provided by Internet hosts as a massive distributed computer. What's more, the computers that participate in the endeavor are unwitting participants: from their perspective, they are merely responding (or not) to TCP traffic. Parasitic computing is especially interesting because the exploit doesn't compromise the security of the affected computers; it piggybacks a math problem onto the TCP checksum work that TCP-enabled hosts carry out under routine operating conditions.
Parasitic Computing Described:
TCP checksums are normally used to ensure that data corruption hasn't occurred somewhere along a packet's journey from one computer to another, along what is usually a multi-hop route across the Internet. The transmitting computer adds a two-byte checksum field to the TCP header, computed as a function of the routing information and the data payload of the packet. The idea is that if corruption occurs in the transport or physical layers, the receiving computer will detect this because the presented checksum no longer corresponds to the data received.
Parasitic computing on TCP checksums maps a new problem set onto the TCP checksum function. In the particular instance discussed in BFJB, the technique is to compute a checksum which corresponds to an answer set for a particular boolean satisfiability problem, and then to send out packets whose data payloads are potential solutions to that problem. Receiving computers will attempt to validate the checksum against the data payload, effectively checking the solution they were offered to the problem under consideration. If the checksum validates, properly configured hosts will reply, and the parasitic computer knows that a solution to the problem has been found. The value of this model is that problems for which there is no known efficient algorithm can be solved by brute force through parallelization across many hosts.
(BFJB includes a schematic illustration of parasitic computing.) Our experiment implemented a modified version of the parasitic computing idea: we didn't rely on the HTTP layer as shown in that drawing. We modified the SYN request packet and listened for a SYN-ACK response. Using the handshake step avoids the overhead of establishing the connection beforehand, but may have introduced the false positives we discuss below.
The TCP checksum function breaks the information it checks into 16-bit words and then sums them. Whenever the sum gets larger than 0xffff, the extra carry column is wrapped back around (so 0xffff + 0x0001 = 0x0001). The 1's complement of this sum is written down as the checksum. The trick presented in BFJB is to take advantage of the correlation between a numeric sum and a boolean operation: if a and b are bits, and summing a + b results in 2, then treated as booleans a ∧ b is TRUE; similarly, if summing a + b results in 1, then a ⊕ b is TRUE. BFJB provides the truth table showing the relationship between these two boolean operators and the mathematical sum. This is the basis of mapping boolean satisfiability into the TCP checksum space.
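A minimal sketch of that fold-and-complement arithmetic (a simplified illustration over bare 16-bit words, not a full TCP checksum over headers and pseudo-headers):

    def checksum16(words):
        # One's-complement sum of 16-bit words, then complement: TCP-style.
        total = 0
        for w in words:
            total += w
            total = (total & 0xFFFF) + (total >> 16)  # wrap the carry around
        return ~total & 0xFFFF

    # The wrap-around described above: 0xffff + 0x0001 folds to 0x0001.
    s = 0xFFFF + 0x0001
    s = (s & 0xFFFF) + (s >> 16)
    assert s == 0x0001

    print(hex(checksum16([0x1234, 0xABCD])))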
In more detail, a boolean satisfiability problem asks whether an assignment of truth values exists which will allow a given formula to evaluate to TRUE. Typically, these formulae are presented in conjunctive normal form (an AND of ORs). However, the problem exemplified in BFJB allows either ∧ (AND) or ⊕ (XOR) to appear in a clause (aside: it's noteworthy that ∨ is derivable from ⊕ and ∧, since [a ∨ b] = [a ⊕ b] ⊕ [a ∧ b], but more on this below). The example in BFJB is built from the specific case where there are 16 unique variables and 8 clauses.
The first step, then, in finding a solution to this problem parasitically is to calculate a checksum which corresponds to an assignment that solves the problem. To do this, BFJB takes each of the 8 clauses in some order and generates a checksum based on the operator involved: for every ∧, write down 10, and for every ⊕, write down 01. Then a checksum is created by taking the 1's complement. This value is the magic checksum which corresponds to a solution to our problem.

After that, packets with variable assignments as data payloads can be generated and sent to TCP-enabled hosts on the Internet. Each variable is padded with a 0 and then column-aligned with its clause operator according to the ordering used for the checksum creation. The padding is crucial because it is what allows a pair of variables under the ∧ operator to sum to 10 without overflowing into the column to the left.
For example, under the assignment where x1 = 1, x2 = 1, x3 = 1, x4 = 0, ..., x15 = 0, and x16 = 1, we generate two 16-bit words with padding and the operands lined up in columns:
0101…00
0100…01
This packet is then sent to a network host, which will evaluate the checksum against the data. Because TCP_CHECKSUM(data) = magic checksum only when the data solves our problem, only those hosts which received a good solution will reply. Each logically possible assignment is generated as a packet payload and sent to a TCP-enabled host for evaluation, and in this way we have the Internet as a parallel distributed computer for finding solutions to our boolean formulae!
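To make the whole mapping concrete, here is a toy, purely local simulation of the scheme: a hypothetical 4-clause instance rather than BFJB's 16-variable, 8-clause example, with no packets actually sent. It enumerates candidate assignments and keeps those whose packed payload words sum to the per-clause targets (10 for ∧, 01 for ⊕) from which the magic checksum would be derived:

    import itertools

    # Hypothetical toy instance: (x0 AND x1), (x2 XOR x3), (x4 XOR x5), (x6 AND x7)
    clauses = [("AND", 0, 1), ("XOR", 2, 3), ("XOR", 4, 5), ("AND", 6, 7)]
    NVARS = 8

    def pack(bits):
        # Pack operand bits, each preceded by a 0 padding bit (2 bits per clause).
        word = 0
        for b in bits:
            word = (word << 2) | b
        return word

    # Target sum: binary 10 per AND clause, 01 per XOR clause; a real packet
    # would carry the one's complement of this as the magic checksum.
    target = 0
    for op, _, _ in clauses:
        target = (target << 2) | (0b10 if op == "AND" else 0b01)

    for bits in itertools.product([0, 1], repeat=NVARS):
        w1 = pack([bits[i] for _, i, _ in clauses])   # first operand per clause
        w2 = pack([bits[j] for _, _, j in clauses])   # second operand per clause
        if w1 + w2 == target:   # padding guarantees no carry between clauses
            print("satisfying assignment:", bits)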
Unfortunately, this relatively high-level description leaves a fair bit of implementation detail to the reader and so we set out to fill in the gaps in order to reproduce the reported results.

HACKABALL – MOST ENGAGING UX FOR DIGITAL EDUCATION

By Author – Shubhangi Agrawal

 

Lifehack (or life-hacking) refers to any trick, shortcut, skill, or novelty method that increases productivity and efficiency in all walks of life. The term was primarily used by computer experts who suffer from information overload, or by those with a playful curiosity about ways to accelerate their workflow by means other than programming.

One of the most amazing life hacks is Hackaball.
Hackaball is a computer you can throw that allows children to program their own games.
Computer-related employment is expected to rise by 22% by 2020. This year, England became the first country to make computer programming a compulsory school subject, and in the U.S., organizations are lobbying for programming to be available to students in every school.
It is a device that would encourage even the youngest children to learn necessary skills that will better prepare them for an ever-more tech-focused world.
Hackaball is a smart, responsive, smartphone-connected gadget designed to teach 6-10-year-olds the principles of coding through fun, physical and mental play. The ball is paired with an iOS application that lets children create their own games and program them onto the device, as well as play the classics. Sensors inside the sturdy Hackaball encourage kids to experiment with sound, light, vibration, and movement. The potential for creating games is as limitless as a child's imagination.
Hackaball quickly caught the attention and captured the imagination of the public.
How does it work?
The computer inside Hackaball has sensors that detect motions like being dropped, bounced, kicked, shaken, or being perfectly still. Children hack the ball with an iOS or OS X app, which allows them to go in and change the behavior of Hackaball to do what they want.
The paired iOS or OS X app comes pre-loaded with several games that can be sent to Hackaball to get kids started. Once they've mastered these initial games, kids can create brand new ones using a simple building-block interface, experimenting with Hackaball's sounds, LED lighting effects, and rumble patterns. You can install the app on as many iPads and iPhones as you like; it's free.
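The article does not document Hackaball's actual programming interface, but the building-block idea, motion events wired to lights, sounds, and rumble, can be sketched as a simple event-to-action rule table. All names below are hypothetical:

    # Hypothetical rule table that a block-based editor might compile to.
    rules = {
        "dropped": [("sound", "crash"), ("light", "red")],
        "shaken":  [("light", "rainbow"), ("vibrate", "short")],
        "still":   [("light", "off")],
    }

    def on_motion_event(event):
        # Dispatch a detected motion event to its configured actions;
        # print() stands in for real device calls (LEDs, speaker, motor).
        for action, argument in rules.get(event, []):
            print(action, argument)

    on_motion_event("shaken")   # -> light rainbow, vibrate short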
Hackaball grows the more they play, rewarding kids with unlockable features, challenging them with broken games to fix, and letting them share their creations with friends.
The variety of games children make and play are limited only by their imagination.

Indian Regional Navigation Satellite System

By Author – Samata Shelare

 

India is looking forward to starting its own navigational system. At present, most countries depend on America's Global Positioning System (GPS). India is all set to have its own navigational system, named the Indian Regional Navigation Satellite System (IRNSS). IRNSS is going to be fully functional by mid-2016. IRNSS is designed to provide accurate position information throughout India; it also covers the region extending 1,500 km around the boundary of India. IRNSS will have 7 satellites, of which 4 are already placed in orbit. The fully deployed IRNSS system consists of 3 satellites in GEO orbit and 4 satellites in GSO orbit, at approximately 36,000 km altitude above the earth's surface.

  1. Indian Regional Navigation Satellite System or IRNSS (NavIC) is designed to provide accurate real-time positioning and timing services to users in India as well as the region extending up to 1,500 km from its boundary.
  2. It is an independent regional navigation satellite system developed by India on par with US-based GPS.
  3. NavIC provides two types of services:
    • Standard positioning service – This is meant for all users.
    • Restricted service – Encrypted service which is provided only to authorized users like military and security agencies.
  4. Applications of IRNSS:
    • Terrestrial, aerial and marine navigation
    • Disaster management
    • Vehicle tracking and fleet management
    • Precise timing, mapping, and geodetic data capture
    • Terrestrial navigation aid for hikers and travelers
    • Visual and voice navigation for drivers
  5. Operational Mechanism
    While the American GPS has 24 satellites in orbit, the number of satellites visible to a ground receiver is limited. In IRNSS, four satellites are always in geosynchronous orbits; hence, each satellite is always visible to a receiver in the region extending 1,500 km around India.
  6. Navigation Constellation
    It consists of seven satellites: three in geostationary earth orbit (GEO) and four in geosynchronous orbit (GSO) inclined at 29 degrees to the equator.
  7. Each satellite has three rubidium atomic clocks, which provide accurate locational data.
  8. The first satellite of the series, IRNSS-1A, was launched on July 1, 2013, and the seventh (the last one) was launched on April 28, 2016.
  9. Though the indigenous navigation system is operational, its services are not yet ready for commercial use.

This is because the chipset required for wireless devices like cell phones to access navigation services is still being developed by ISRO and is yet to hit the market.

The four deployed satellites are IRNSS-1A, IRNSS-1B, IRNSS-1C, and IRNSS-1D. Further, IRNSS-1E is planned to be launched by January, and IRNSS-1F and IRNSS-1G by March 2016.

IRNSS will provide two types of services, namely, Standard Positioning Service (SPS) which is provided to all the users and Restricted Service (RS), which is an encrypted service provided only to the authorized users.
ISRO is recommending a small additional hardware for handheld devices that can receive S-Band signals from IRNSS satellites and inclusion of a code in the phone software to receive L-Band signals.
A senior ISRO official said that both the L- and S-band signals received from the seven-satellite IRNSS constellation are processed by special embedded software which significantly reduces the errors caused by atmospheric disturbances. This, in turn, gives superior location accuracy compared with the American GPS.
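The dual-frequency advantage the official describes reflects standard GNSS practice: ionospheric delay scales as 1/f², so pseudoranges measured at two frequencies can be combined to cancel it to first order. A minimal sketch of that textbook combination follows; this is illustrative math, not ISRO's actual software, and the frequency values should be treated as assumptions:

    # Ionosphere-free pseudorange combination (first-order correction).
    F_L5 = 1176.45e6    # IRNSS L5 carrier frequency, Hz (assumed published value)
    F_S = 2492.028e6    # IRNSS S-band carrier frequency, Hz (assumed published value)

    def iono_free_range(rho_l5, rho_s):
        # The delay term proportional to 1/f**2 cancels in this combination.
        return (F_S**2 * rho_s - F_L5**2 * rho_l5) / (F_S**2 - F_L5**2)

    # Toy example: a 20,000 km true range plus frequency-dependent delay.
    true_range = 2.0e7
    iono_l5 = 5.0                              # meters of extra delay at L5
    iono_s = iono_l5 * (F_L5 / F_S) ** 2       # smaller delay at the higher band
    print(iono_free_range(true_range + iono_l5, true_range + iono_s))  # ~2.0e7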
At present, only America's GPS and Russia's GLONASS (GLObal NAvigation Satellite System) are independent and fully functional navigational systems. India will be the third country to have its own navigational system.
The main advantage of India's own navigational system is that India won't be dependent on the US's GPS for defense operations. India had no other option until now: during the Kargil war, the Indian Army and Air Force had to use GPS, even though information related to security operations is highly confidential and should not be shared with anyone.

Hydrogen: Future Fuel

By Author – Rishabh Sontakke

 

Hydrogen fuel is a zero-emission fuel when burned with oxygen. It can be used in electrochemical cells, or in internal combustion engines, to power vehicles and electric devices. It is also used in the propulsion of spacecraft and might potentially be mass-produced and commercialized for passenger vehicles and aircraft.
Hydrogen lies in the first group and first period in the periodic table, i.e. it is the first element on the periodic table, making it the lightest element. Since hydrogen gas is so light, it rises in the atmosphere and is therefore rarely found in its pure form, H2. In a flame of pure hydrogen gas, burning in air, the hydrogen (H2) reacts with oxygen (O2) to form water (H2O) and releases energy.
2H2(g) + O2(g) → 2H2O(g) + energy
If carried out in the atmospheric air instead of pure oxygen, as is usually the case, hydrogen combustion may yield small amounts of nitrogen oxides, along with the water vapor.
The energy released enables hydrogen to act as a fuel. In an electrochemical cell, that energy can be used with relatively high efficiency. If it is simply used for heat, the usual thermodynamics limits the thermal efficiency.
Since there is very little free hydrogen gas, hydrogen is, in practice, only an energy carrier, like electricity, not an energy resource. Hydrogen gas must be produced, and that production always requires more energy than can later be retrieved from the gas as a fuel. This is a limitation imposed by the physical law of conservation of energy. Most hydrogen production also carries environmental impacts.

Hydrogen Production:

Because pure hydrogen does not occur naturally on Earth in large quantities, its industrial production takes a substantial amount of energy. There are different ways to produce it, such as electrolysis and the steam-methane reforming process.
Electrolysis and the steam-methane reforming process:
In electrolysis, electricity is run through water to separate the hydrogen and oxygen atoms. This method can use wind, solar, geothermal, hydro, fossil fuels, biomass, nuclear, and many other energy sources. Obtaining hydrogen from this process is being studied as a viable way to produce it domestically at a low cost. Steam-methane reforming, the current leading technology for producing hydrogen in large quantities, extracts the hydrogen from methane. However, this reaction causes a side production of carbon dioxide and carbon monoxide, which are greenhouse gases and contribute to global warming.
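For reference, the overall electrolysis reaction is the combustion reaction shown earlier run in reverse:

2H2O(l) + energy → 2H2(g) + O2(g)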

Energy:

Hydrogen is locked up in enormous quantities in water, hydrocarbons, and other organic matter. One of the challenges of using hydrogen as a fuel is being able to efficiently extract it from these compounds. Currently, steam reforming, which combines high-temperature steam with natural gas, accounts for the majority of the hydrogen produced. Hydrogen can also be produced from water through electrolysis, but this method is much more energy-demanding.

Once extracted, hydrogen is an energy carrier (i.e. a store for energy first generated by other means). The energy can be delivered to fuel cells to generate electricity and heat, or burned to run a combustion engine. In each case, hydrogen is combined with oxygen to form water. The heat in a hydrogen flame is a radiant emission from the newly formed water molecules: the molecules are created in an excited state and then transition to the ground state, the transition releasing thermal radiation. When burning in air, the temperature is roughly 2,000 °C.

Historically, carbon has been the most practical carrier of energy, as more energy is packed into fossil fuels than into pure liquid hydrogen of the same volume. Carbon-based fuels store energy densely and release even more energy when burned together with hydrogen. However, burning carbon-based fuel and releasing its exhaust contributes to global warming due to the greenhouse effect of carbon gases.

Hydrogen is the smallest molecule, and some of it will inevitably escape from any known container or pipe in minute amounts, yet simple ventilation can prevent such leakage from ever reaching the volatile 4% hydrogen-air mixture. So long as the product is in a gaseous or liquid state, pipes are a classic and very efficient form of transportation. Pure hydrogen, though, causes metal to become brittle, suggesting metal pipes may not be ideal for hydrogen transport.

Uses:

Hydrogen fuel can provide motive power for liquid-propellant rockets, cars, boats, and airplanes, as well as for portable or stationary fuel cell applications, which can power an electric motor. The problems of using hydrogen fuel in cars arise from the fact that hydrogen is difficult to store in either a high-pressure tank or a cryogenic tank.
An alternative fuel must be technically feasible, economically viable, easily converted to another energy form when combusted, safe to use, and potentially harmless to the environment. Hydrogen is the most abundant element in the universe. Although hydrogen does not exist freely in nature, it can be produced from a variety of sources, such as steam reformation of natural gas, gasification of coal, and electrolysis of water. Hydrogen gas can be used in traditional gasoline-powered internal combustion engines (ICE) with minimal conversions. However, vehicles with polymer electrolyte membrane (PEM) fuel cells provide greater efficiency. Hydrogen gas combusts with oxygen to produce only water vapor, and even the production of hydrogen gas can be emissions-free with the use of renewable energy sources. The current price of hydrogen is about $4 per kg, which is about the equivalent of a gallon of gasoline.

However, in fuel cell vehicles such as the 2009 Honda FCX Clarity, 1 kg provides about 68 miles of travel. Of course, the vehicle price range is currently very high, and ongoing research and implementation of a hydrogen economy are required to make this fuel economically feasible. The current focus is directed toward hydrogen as a clean alternative fuel that produces insignificant greenhouse gas emissions. If hydrogen becomes the next transportation fuel, the primary energy source used to produce the vast amounts of hydrogen will not necessarily be a renewable, clean source. Carbon sequestration is referenced frequently as a means to eliminate CO2 emissions from the burning of coal, where the gases are captured and sequestered in gas wells or depleted oil wells. However, the availability of these sites is not widespread, and the presence of CO2 may acidify groundwater. Storage and transport are also major issues due to hydrogen's low density.
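Using the figures above, the running cost is easy to estimate: at $4 per kg and about 68 miles per kg, hydrogen fuel works out to roughly 4 / 68 ≈ $0.06 per mile of travel.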

Is the investment in new infrastructure too costly?

Can our old infrastructure currently used for natural gas transport be retrofitted for hydrogen?

The burning of coal and nuclear fission are the main energy sources that will be used to provide an abundant supply of hydrogen fuel.

How does this process help our current global warming predicament? The U.S. Department of Energy has recently funded a research project to produce hydrogen from coal at large-scale facilities, with carbon sequestration in mind.

Is this the wrong approach? Should there be more focus on other forms of energy that produce no greenhouse gas emissions? If the damage to the environment is interpreted as a monetary cost, the promotion of energy sources such as wind and solar may prove to be a more economical approach.

The possibility of a hydrogen economy that incorporates the use of hydrogen into every aspect of transportation requires much further research and development. The most economical and major source of hydrogen in the US is the steam reformation of natural gas, a nonrenewable resource and a producer of greenhouse gases. The electrolysis of water is a potentially sustainable method of producing hydrogen, but only if renewable energy sources are used for the electricity. Today, less than 5% of our electricity comes from renewable sources such as solar, wind, and hydro. Nuclear power may be considered a renewable resource by some, but the waste generated by this energy source is a major problem. A rapid shift toward renewable energy sources is required before this proposed hydrogen economy can prove itself.

Solar photovoltaic (PV) systems are the current focus of my research related to the energy source required for the electrolysis of water. One project, conducted at the GM Proving Ground in Milford, MI, employed 40 solar PV modules directly connected to an electrolyzer/storage/dispenser system. The result was an 8.5% efficiency in the production of hydrogen, with an average production of 0.5 kg of high-pressure hydrogen per day. Research similar to this may result in the optimization of the solar hydrogen energy system.

Furthermore, the infrastructure for a hydrogen economy will come with high capital costs. The transport of hydrogen through underground pipes seems to be the most economical option when demand grows enough to require a large centralized facility. However, in places of low population density, this method may not be economically feasible. The project mentioned earlier may become an option for individuals to produce their own hydrogen gas at home, with solar panels lining their roofs. A drastic change is needed to slow down the effects of our fossil-fuel-dependent society. Conservation can indeed help, but the lifestyles we are accustomed to require certain energy demands.

Bluejacking

By Author – Rishabh Sontakke

 

What is Bluejacking?
Bluejacking is the sending of unsolicited messages over Bluetooth to Bluetooth-enabled devices such as mobile phones, PDAs, or laptop computers. Bluetooth has a very limited range: usually around 10 meters on mobile phones, though laptops can reach up to 100 meters with powerful transmitters.
Origin of Bluejacking-
This bluejack phenomenon started after a Malaysian IT consultant named Ajack posted a comment on a mobile phone forum. Ajack told IT Web that he used his Ericsson cellphone in a bank to send a message to someone with a Nokia 7650. Ajack did a Bluetooth discovery to see if there was another Bluetooth device around. Discovering a Nokia 7650 in the vicinity, he created a new contact, filled in the first name with "Buy Ericsson!", and sent a business card to the Nokia phone.
How to Bluejack:
Assuming that you now have a Bluetooth phone in your hands, the first thing to do is to make sure that Bluetooth is enabled. You will need to read the handbook of the particular phone (or PDA, etc.) that you have, but somewhere in the menu you will find the item that enables and disables Bluetooth. Your phone or PDA will then start to search the airwaves for other devices within range. If you are lucky, you will see a list of them appear; otherwise, it will say that it cannot find any. If the latter happens, relocate to another crowd or wait a while and try again. If you have a list of found devices, then let the fun begin.

The various steps involved – on a mobile phone

  1. First press the 5-way joystick down.
  2. Then choose options.
  3. Then choose “New contact”.
  4. Then, in the first (name) line, type your desired message.
  5. Then press done.
  6. Then go to the contact.
  7. Then press options.
  8. Then scroll down to send.
  9. Then choose "Via Bluetooth" and press "Select".
  10. The phone will then search for enabled devices.

The various steps involved – on a computer/laptop

  1. Go to contacts in your Address Book program (e.g. Outlook).
  2. Create a new contact.
  3. Enter the message into one of the name fields.
  4. Save the new contact.
  5. Go to the address book.
  6. Right-click on the message/contact.
  7. Go to action.
  8. Go to Send to Bluetooth.
  9. Click on other.
  10. Select a device from the list and double-click on it.

Software Tools:

  • Bluespam: BlueSpam searches for all discoverable Bluetooth devices and sends a file to them (spams them) if they support OBEX.
  • Meeting point: Meeting point is the perfect tool to search for Bluetooth devices. Combine it with any bluejacking tool and have lots of fun. This software is compatible with Pocket PC and Windows.
  • Freejack: Freejack is compatible with Java phones like the Nokia N-series.
  • Easyjacking (eJack): Allows sending of text messages to other Bluetooth-enabled devices.

Usage of Bluejacking:
Bluejacking can be used in many places and for various purposes: in busy shopping centers, train stations, high streets, cinemas, cafés/restaurants/pubs, etc. The main uses of bluejacking tools are advertising and location-based services. Experimental results show that the system provides a viable solution for realizing permission-based mobile advertising.

Now, remember that Bluetooth works only over short distances, so you need to find a crowd. Bluejacking is very new, so not everyone will have a Bluetooth phone or PDA (personal digital assistant); the bigger the crowd, the more likely you are to find a victim. On the train, in the cafe, or standing in line are all good places to start.
Bluejackers often look for the receiving phone to ping, or for the user to react. In order to carry out bluejacking, the sending and receiving devices must be within 10 meters of one another.
Code of Ethics-

  • Bluejackers will only send messages/pictures. They will never try to hack a device for the purpose of copying or modifying any files on any device or upload any executable files.
  • Any such messages or pictures sent will not be insulting, libelous, or vulgar in nature, and will be copyright-free or copyrighted by the sender.
  • If no interest is shown by the recipient after 2 messages the bluejacker will desist and move on.
  • The bluejacker will restrict their activity to 10 messages maximum unless in exceptional circumstances.
  • If the bluejacker senses that he/she is causing distress rather than mirth to the recipient, they will immediately cease all activity towards them.
  • If a bluejacker is caught in the act he/she will be as co-operative as possible and not hide any details of their activity.

Related Concepts:

  1. BlueSnarfing: Bluesnarfing is the term associated with downloading any and all information from a hacked device. Bluesnarfing is the theft of information from a wireless device through a Bluetooth connection, often between phones, desktops, laptops, and PDAs. This allows access to the calendar, contact list, emails, and text messages. Bluesnarfing is much more serious than bluejacking.
  2. Bluebugging: Bluebugging is a form of Bluetooth attack. In order of discovery date, Bluetooth attacks started with bluejacking, then bluesnarfing, and then bluebugging. The Bluebug program allows the attacker to take control of a victim's phone and use it to call the attacker's own phone. This means that the bluebug user can simply listen to any conversation his victim is having in real life.

How to Prevent Being Bluejacked-
To prevent being bluejacked, disable Bluetooth on the mobile device when not in use. The device will then not show up on a bluejacker's phone when he/she attempts to send a message, and messages do not queue up.
Good practices for Bluetooth-enabled devices: whether someone is unwilling to partake in bluejacking or just does not want to be bothered with these messages, the following are some good practices to consider:

  • Do not reveal an identity when either sending or receiving Bluejacked messages.
  • Never threaten anyone.
  • Never send messages that can be considered abusive.
  • Never reveal personal information in response to a Bluejacked message.
  • Disable the Bluetooth option when not in use in order to prevent bluejacked messages.
  • If a bluejacking message is received, delete the message without accepting it, or it will be added to the device's address book.

Warning:
Never try to hack a device for the purpose of copying or modifying any files on any device, or to upload any executable files. By hacking a device you are committing an offense under the Computer Misuse Act 1990, which states that it is an offense to obtain unauthorized access to any computer.
Conclusion:
Bluejacking is a technique by which we can interact with new people, and it has the potential to revolutionize the market by sending advertisements about products, enterprises, etc. to Bluetooth-configured mobile phones, so that people become aware of them by seeing them on their phones.

Enterprise Resource Planning

By Author – Prankul Sinha

 

Introduction:

  • ERP is usually referred to as a category of business-management software that an organization can use to collect, store, manage, and interpret data from its many business activities.
  • ERP provides a continuously updated view of core business processes using common databases maintained by a database management system. ERP systems track business resources (cash, raw materials, production capacity) and the status of business commitments (orders, purchase orders, and payroll). The applications that make up the system share data across the various departments (manufacturing, purchasing, sales, accounting, etc.) that provide the data.
  • The system integrates with various organizational systems, enabling error-free transactions and production. It runs on a variety of computer hardware and network configurations, typically using a database as an information source.

 

Implementation:
Generally, three types of services are available to help implement such changes: consulting, customization, and support. Implementation time depends on business size, customization, and the scope of process changes. Modular ERP systems can be implemented in stages. The typical project for a large enterprise takes about 14 months and requires around 150 consultants. Small projects can require months; multinational and other large implementations can take years. Customization can substantially increase implementation times.

Besides that, information processing influences various business functions. For example, some large corporations like Wal-Mart use a just-in-time inventory system, which reduces inventory storage and increases delivery efficiency but requires up-to-date data. Before 2014, Walmart used a system called Inforem, developed by IBM, to manage replenishment.

 

Process preparation:

Implementing ERP typically requires changes in existing business processes. Poor understanding of needed process changes prior to starting implementation is the main reason for project failure. The difficulties could be related to the system, business process, infrastructure, training, or lack of motivation.

It is therefore crucial that organizations thoroughly analyze business processes before they implement ERP software. Analysis can identify opportunities for process modernization. It also enables an assessment of the alignment of current processes with those provided by the ERP system. Research indicates that risk of business process mismatch is decreased by:

  • Linking current processes to the organization’s strategy
  • Analyzing the effectiveness of each process.
  • Understanding existing automated solutions.

 

Customization:
ERP systems are theoretically based on industry best practices, and their makers intend that organizations deploy them as is. ERP vendors do offer customers configuration options that let organizations incorporate their own business rules, but gaps in features often remain even after configuration is complete. ERP customers have several options to reconcile feature gaps, each with their own pros/cons. Technical solutions include rewriting part of the delivered software, writing a homegrown module to work within the ERP system, or interfacing to an external system.

 

Advantages:
The most fundamental advantage of ERP is that the integration of myriad business processes saves time and expense. Management can make decisions faster and with fewer errors. Data becomes visible across the organization. Tasks that benefit from this integration include:

  • Sales forecasting, which allows inventory optimization.
  • Chronological history of every transaction through relevant data compilation in every area of operation.
  • Order tracking, from acceptance through fulfillment
  • Revenue tracking, from invoice through cash receipt
  • Matching purchase orders (what was ordered), inventory receipts (what arrived), and costing

 

Disadvantages:

Customization can be problematic. Compared to the best-of-breed approach, ERP can be seen as meeting an organization’s lowest-common-denominator needs, forcing the organization to find workarounds to meet unique demands.

  • Re-engineering business processes to fit the ERP system may damage competitiveness or divert focus from other critical activities.
  • ERP can cost more than less integrated or less comprehensive solutions.
  • High ERP switching costs can increase the ERP vendor’s negotiating power, which can increase support, maintenance, and upgrade expenses.
  • Overcoming resistance to sharing sensitive information between departments can divert management attention.
  • Integration of truly independent businesses can create unnecessary dependencies.
  • Extensive training requirements take resources from daily operations.
  • Harmonization of ERP systems can be a mammoth task and requires a lot of time, planning, and money.

Attacks on Smart Cards

By Author – Samata Shelare

 

When hit by an APT attack, many companies implement smart cards and/or other two-factor authentication mechanisms as a reactive measure. But thinking that these solutions will prevent credential theft is a big mistake. Attackers can bypass these protection mechanisms with clever techniques.

Nowadays, adversaries in the form of self-spreading malware or APT campaigns utilize Pass-the-Hash, a technique that allows them to escalate privileges in the domain. When Pass-the-Hash is not handy, they will use other techniques such as Pass-the-Ticket or Kerberoasting.

What makes smart cards so special?

A smart card is a piece of specialized cryptographic hardware that contains its own CPU, memory, and operating system. Smart cards are especially good at protecting cryptographic secrets, like private keys and digital certificates.

Smart cards may look like credit cards without the stripe, but they’re far more secure. They store their secrets until the right interfacing software accesses them in a predetermined manner and the correct second-factor PIN is provided. Smart cards often hold users’ personal digital certificates, which prove a user’s identity to an authentication requestor. Even better, smart cards rarely hand over the user’s private key. Instead, they provide the requesting authenticator “proof” that they have the correct private key.

After a company is subjected to a pass-the-hash attack, it often responds by jettisoning weak or easy password hashes. On many occasions, smart cards are the recommended solution, and everyone jumps on board. Because digital certificates aren’t hashes, most people think they’ve found the answer.

In this experiment, we will perform the four most common credential theft attacks on a domain-connected PC with both smart card and 2FA enabled:

  1. Clear-text password theft
  2. Pass-the-Hash attack
  3. Pass-the-Ticket attack
  4. Process token manipulation attack

Pass the Smart Card Hash

When authenticating a user with a smart card and PIN (Personal Identification Number) in an Active Directory network (which is the case in some 90% of networks), the Domain Controller returns an NTLM hash. The hash is calculated from a randomly generated string rather than a user-chosen password. Presenting this hash to the DC identifies you as that user.

This hash can be reused and replayed without any need for the smart card. It is stored in the LSASS process in the endpoint’s memory, and it’s easily readable by an adversary who has compromised the endpoint, using tools like Mimikatz or WCE, or even by just dumping the memory of the LSASS process with Task Manager. The hash exists in memory because it’s crucial for single sign-on (SSO) support.
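
To see why this hash is password-equivalent, it helps to recall how an NT hash is derived: it is simply the MD4 digest of the UTF-16LE encoding of the password, with no salt. A minimal Python sketch (MD4 is a legacy algorithm, so depending on your OpenSSL build this may require enabling legacy providers):

    import hashlib

    def nt_hash(password: str) -> str:
        # NT hash = MD4 over the UTF-16LE bytes of the password, unsalted.
        # "md4" may be unavailable where OpenSSL disables legacy digests.
        return hashlib.new("md4", password.encode("utf-16le")).hexdigest()

    print(nt_hash("Password123!"))

Because the hash itself, not the password, is the long-term secret used in the NTLM exchange, anyone holding it can compute valid responses, which is exactly what Pass-the-Hash exploits.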

This is how smart card login works:

  • The user inserts his smart card and enters his own PIN in a login window.
  • The smart card subsystem authenticates the user as the owner of the smart card and retrieves the certificate from the card.
  • The smart card client sends the certificate to the KDC (Kerberos Key Distribution Center) on the DC.
  • The KDC verifies the Smart Card Login Certificate, retrieves the associated user of this certificate, and builds a Kerberos TGT for that user.
  • The KDC returns the encrypted TGT to the client.
  • The client, with the smart card’s help, decrypts the reply and retrieves the TGT along with the NTLM hash.
  • Presenting only the TGT or the NTLM hash from now on will get you authenticated.

During standard login, the NTLM hash is calculated from the user’s password, and GPO can force users to change their passwords periodically, rotating the hash along with them. Because a smart card doesn’t involve a password, the hash is calculated only when you set the “smart card required for interactive logon” attribute, and it is never rotated. This exposes a huge persistence risk. Once the smart card user’s computer is compromised, an attacker can grab the hash generated from the smart card authentication. Now he has a hash with unlimited lifetime, and worse, lifetime persistence on your domain, because the hash will never change as long as Smart Card Logon is forced for that user.

However, Microsoft offers a solution to the smart card persistence problem: the hashes of smart card accounts can be rotated every 60 days. But this is only applicable if your domain functional level is Windows Server 2016.

Smart cards can’t protect against Pass-the-Hash, and outside that one scenario, their hash never changes.

Pass-The-2FA Hash

During authentication with some third-party 2FA solutions, the hash is calculated from the user’s managed password. And because the password is managed, it is changed frequently, sometimes even immediately.

In some cases, 2FA managed to mitigate Pass-the-Hash attempts because the hash was calculated using the OTP (one-time password). Therefore, the hash won’t be valid anymore, and the adversary who stole it won’t be able to authenticate with it.
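
As an illustration only (this is not any particular vendor’s scheme), mixing the OTP into the hash computation makes the resulting credential single-use:

    import hashlib

    def session_hash(managed_password: str, otp: str) -> str:
        # Illustrative sketch: binding a one-time password into the hash
        # input yields a different credential for every login.
        return hashlib.sha256((managed_password + otp).encode()).hexdigest()

    h1 = session_hash("managed-secret", "483920")  # first login
    h2 = session_hash("managed-secret", "118274")  # next login, fresh OTP
    assert h1 != h2  # a stolen h1 is stale once the OTP rotates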

Other vendors, like AuthLite, mitigate Pass-the-Hash attempts because the cached hash of 2FA sessions is manipulated by AuthLite, so stealing the hash from memory is useless. There’s still additional verification at the DC, and the OTP must be forwarded to AuthLite before authenticating as a 2FA token.

Depending on the 2FA solution you have, you probably won’t be able to Pass-the-Hash.

With their embedded microchip technology and the secure authentication they can provide, smart cards or hardware tokens have been relied upon to give physical access and the go-ahead for data transfer in a multitude of applications and transactions, in the public, corporate, and government sectors.

But, robust as they are, smart cards do have weaknesses, and intelligent hackers have developed a variety of techniques for observing and blocking their operations so as to gain access to credentials, information, and funds. In this article, we will take a closer look at the technology and at how these attacks on smart cards are carried out.

Smart Communications

Small information packets called Application Protocol Data Units (APDUs) are the basis of communication between a Card Accepting Device (CAD) and a smart card, which may take the form of a standard credit-card-sized unit, the SIM card for a smartphone, or a USB dongle.
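
Per ISO/IEC 7816-4, a command APDU carries a four-byte header (class, instruction, and two parameter bytes), optionally followed by a length byte and a data payload. A minimal Python sketch building a SELECT command for the card’s Master File (file identifier 0x3F00):

    def build_apdu(cla: int, ins: int, p1: int, p2: int, data: bytes = b"") -> bytes:
        # Short ISO/IEC 7816-4 command APDU: CLA INS P1 P2 [Lc data]
        apdu = bytes([cla, ins, p1, p2])
        if data:
            apdu += bytes([len(data)]) + data  # Lc byte, then the payload
        return apdu

    # SELECT (INS 0xA4) by file identifier: the Master File 0x3F00
    select_mf = build_apdu(0x00, 0xA4, 0x00, 0x00, bytes([0x3F, 0x00]))
    print(select_mf.hex())  # 00a40000023f00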

Data travels between the smart card and CAD in one direction at a time, and both objects use an authentication protocol to identify each other. A random number generated by the card is sent to the CAD, which uses a shared encryption key to scramble the digits before sending the number back. The card compares this returned figure with its own encryption, and a reverse process occurs as the communication exchange continues.

Each message between the two is authenticated by a special code: a figure based on the message content, a random number, and an encryption key. Symmetric DES (Data Encryption Standard), 3DES (Triple DES), and public-key RSA (the Rivest-Shamir-Adleman algorithm) are the encryption methods most commonly used.
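
The exchange described above can be sketched in a few lines. Real cards use DES, 3DES, or RSA as noted; in this sketch HMAC-SHA256 merely stands in for the shared-key scrambling step, to show the shape of the protocol rather than the exact cipher:

    import hashlib
    import hmac
    import os

    SHARED_KEY = os.urandom(16)  # provisioned into both card and CAD

    def respond(key: bytes, challenge: bytes) -> bytes:
        # Stand-in for the scrambling step; real hardware would use
        # DES/3DES or RSA rather than HMAC-SHA256.
        return hmac.new(key, challenge, hashlib.sha256).digest()

    # The card issues a random challenge...
    challenge = os.urandom(8)
    # ...the CAD scrambles it with the shared key and sends it back...
    cad_response = respond(SHARED_KEY, challenge)
    # ...and the card checks the response against its own computation.
    assert hmac.compare_digest(cad_response, respond(SHARED_KEY, challenge))

    def mac_message(key: bytes, message: bytes, nonce: bytes) -> bytes:
        # Each subsequent message carries a code computed over its content
        # plus a random number, so tampering in transit is detectable.
        return hmac.new(key, nonce + message, hashlib.sha256).digest()

The reverse pass, in which the CAD challenges the card, works the same way with the roles swapped.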

Generally secure, then; but hackers using brute-force methods are capable of breaking each of these encryption schemes, given enough time and sufficiently powerful hardware.

OS-Level Protection

Smart card operating systems organize their data into a three-level hierarchy. At the top, the root or Master File (MF) may hold several Dedicated Files (DFs, analogous to directories or folders) and Elementary Files (EFs, like regular files on a computer). But DFs can also hold files, and all three levels use headers that spell out their security attributes and user privileges. Applications may only reach a given position in the hierarchy if they have the relevant access rights.
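
A rough model of that hierarchy, with each node’s header carrying its access attributes (the file identifiers below follow the common SIM layout; the attribute names are illustrative):

    from dataclasses import dataclass, field

    @dataclass
    class CardFile:
        # One node in the MF/DF/EF tree; the header spells out access rights.
        name: str
        kind: str                                   # "MF", "DF", or "EF"
        access: set = field(default_factory=set)    # e.g. {"CHV1"} = PIN needed
        children: list = field(default_factory=list)

    mf = CardFile("3F00", "MF")                   # Master File at the root
    df = CardFile("7F10", "DF")                   # a Dedicated File below it
    ef = CardFile("6F3A", "EF", access={"CHV1"})  # an Elementary File inside
    df.children.append(ef)
    mf.children.append(df)

    def can_access(f: CardFile, granted: set) -> bool:
        # An application may only reach a file if it holds every right
        # listed in that file's header.
        return f.access <= granted

    assert not can_access(ef, set())    # no PIN verified yet
    assert can_access(ef, {"CHV1"})     # after CHV1 (PIN) verification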

Personal Identification Numbers (PINs) are associated with the Cardholder Verification 1 (CHV1) and Cardholder Verification 2 (CHV2) levels of access, which correspond to the user PIN allocated to a cardholder and the unblocking code needed to re-enable a blocked card.

The operating system blocks a card after an incorrect PIN is entered a certain number of times. This applies to both the user PIN and the unblocking code, and while it gives the cardholder a measure of security against fraud, it also gives malicious intruders an opportunity for sabotage: deliberately locking a user out of their own accounts.
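
The blocking logic amounts to a simple retry counter; a sketch with illustrative limits (three tries is a typical CHV1 budget):

    class PinGuard:
        # Retry counter behind CHV1: too many wrong PINs block the card,
        # after which only the unblocking code (CHV2) can restore it.
        def __init__(self, pin: str, max_tries: int = 3):
            self.pin = pin
            self.max_tries = max_tries
            self.tries_left = max_tries
            self.blocked = False

        def verify(self, attempt: str) -> bool:
            if self.blocked:
                return False
            if attempt == self.pin:
                self.tries_left = self.max_tries  # reset on success
                return True
            self.tries_left -= 1
            if self.tries_left == 0:
                self.blocked = True
            return False

    guard = PinGuard("1234")
    for bad_guess in ("0000", "1111", "9999"):  # a saboteur guessing PINs...
        guard.verify(bad_guess)
    assert guard.blocked  # ...locks the legitimate cardholder out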

Host-Based Security

Systems and networks using host-based security deploy smart cards as simple carriers of information. Data on the cards may be encrypted, but protecting it is the responsibility of the host system, and information may be vulnerable as it’s being transmitted between card and computer.

Employing smart memory cards with a password mechanism that prevents unauthorized reading offers some additional protection, but passwords can still become accessible to hackers as unencrypted transmissions move between the card and the host.

Card-Based Security

Systems with card- or token-based security treat smart cards with microprocessors as independent computing devices. During interactions between cards and the host system, user identities can be authenticated and a staged protocol put in place to ensure that each card has clearance to access the system.

Access to data on the card is controlled by its own operating system, and any pre-configured permissions set by the organization issuing the card. So for hackers, the target for infiltration or sabotage becomes the smart card itself, or some breach of the issuing body which may affect the condition of cards issued in the future.

Physical Vulnerabilities

For hackers, gaining physical access to the embedded microchip on a smart card is a comparatively straightforward process.

Physical tampering is an invasive technique that begins with removing the chip from the surface of the plastic card. It’s a simple enough matter of cutting away the plastic behind the chip module with a sharp knife until the epoxy resin binding it to the card becomes visible. This resin can then be dissolved with a few drops of fuming nitric acid, after which the card is shaken in acetone until the resin and acid are washed away.

The attacker may then use an optical microscope with a camera attachment to take a series of high-resolution shots of the microchip surface. Analysis of these photos can reveal the patterns of metal lines tracing the card’s data and memory bus pathways. The goal is to identify the lines that must be reproduced in order to gain access to the memory values controlling the specific data sets the attacker is looking for.

 
