Official Blog

Indian Regional Navigation Satellite System

By Author – Samata Shelare

 

India is looking forward to starting its own navigation system. At present, most countries depend on America's Global Positioning System (GPS). India is all set to have its own navigation system, named the Indian Regional Navigation Satellite System (IRNSS), expected to be fully functional by mid-2016. IRNSS is designed to provide accurate position information throughout India, and also covers a region extending 1,500 km around India's boundary. IRNSS will have 7 satellites, of which 4 are already in orbit. The fully deployed system consists of 3 satellites in geostationary orbit (GEO) and 4 in geosynchronous orbit (GSO), at an altitude of approximately 36,000 km above the Earth's surface.
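The ~36,000 km altitude is not arbitrary: it follows from Kepler's third law for an orbit whose period matches one sidereal day. As a quick sanity check (a sketch using standard physical constants, not ISRO data):

```python
import math

# Kepler's third law: a^3 = GM * T^2 / (4 * pi^2), where a is the orbital
# radius. For a geosynchronous orbit, T is one sidereal day.
GM = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
T = 86164.1                # sidereal day, seconds
R_EARTH = 6378.137e3       # Earth's equatorial radius, m

a = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)   # orbital radius, m
altitude_km = (a - R_EARTH) / 1000

print(f"Geosynchronous altitude: {altitude_km:,.0f} km")  # ~35,786 km
```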

  1. Indian Regional Navigation Satellite System or IRNSS (NavIC) is designed to provide accurate real-time positioning and timing services to users in India as well as the region extending up to 1,500 km from its boundary.
  2. It is an independent regional navigation satellite system developed by India on par with US-based GPS.
  3. NavIC provides two types of services:
    • Standard positioning service – This is meant for all users.
    • Restricted service – Encrypted service which is provided only to authorized users like military and security agencies.
  4. Applications of IRNSS:
    • Terrestrial, aerial and marine navigation
    • Disaster management
    • Vehicle tracking and fleet management
    • Precise timing, mapping and geodetic data capture
    • Terrestrial navigation aid for hikers and travelers
    • Visual and voice navigation for drivers
  5. Operational Mechanism
    While American GPS has 24 satellites in orbit, the number of satellites visible to a ground receiver at any time is limited. In IRNSS, four satellites are always in geosynchronous orbits. Hence, each satellite is always visible to a receiver in a region 1,500 km around India.
  6. Navigation Constellation
    It consists of seven satellites: three in geostationary earth orbit (GEO) and four in geosynchronous orbit (GSO) inclined at 29 degrees to the equator.
  7. Each satellite has three rubidium atomic clocks, which provide accurate locational data.
  8. The first satellite of the series, IRNSS-1A, was launched on July 1, 2013, and the seventh and last of the series was launched on April 28, 2016.
  9. Though the indigenous navigation system is operational, its services are not yet ready for commercial use.

This is because the chipset required for wireless devices like cell phones to access navigation services is still being developed by ISRO and is yet to hit the market.

The four deployed satellites are IRNSS-1A, IRNSS-1B, IRNSS-1C and IRNSS-1D. IRNSS-1E was planned for launch by January 2016, and IRNSS-1F and IRNSS-1G by March 2016.

IRNSS will provide two types of services, namely, Standard Positioning Service (SPS) which is provided to all the users and Restricted Service (RS), which is an encrypted service provided only to the authorized users.
ISRO recommends small additional hardware for handheld devices so that they can receive S-band signals from IRNSS satellites, and the inclusion of code in the phone software to receive L-band signals.
A senior ISRO official said that "both these L- and S-band signals received from the seven-satellite constellation of the IRNSS are processed by special embedded software which significantly reduces the errors caused by atmospheric disturbances. This, in turn, gives superior location accuracy compared to the American GPS system."
At present, only America's GPS and Russia's GLONASS (GLObal NAvigation Satellite System) are independent and fully functional navigational systems. India will be the third country to have its own navigational system.
The main advantage of India's own navigational system is that India won't be dependent on the US's GPS for defense operations. India had no option till now: during the Kargil war, the Indian Army and Air Force had to use GPS. Information related to security operations is highly confidential and should not be shared with anyone.

Hydrogen: Future Fuel

By Author – Rishabh Sontakke

 

Hydrogen fuel is a zero-emission fuel when burned with oxygen. It can power vehicles and electric devices through electrochemical cells or through combustion in internal combustion engines. It is also used to propel spacecraft and might potentially be mass-produced and commercialized for passenger vehicles and aircraft.
Hydrogen lies in the first group and first period in the periodic table, i.e. it is the first element on the periodic table, making it the lightest element. Since hydrogen gas is so light, it rises in the atmosphere and is therefore rarely found in its pure form, H2. In a flame of pure hydrogen gas, burning in air, the hydrogen (H2) reacts with oxygen (O2) to form water (H2O) and releases energy.
2H2(g) + O2(g) → 2H2O(g) + energy
If carried out in the atmospheric air instead of pure oxygen, as is usually the case, hydrogen combustion may yield small amounts of nitrogen oxides, along with the water vapor.
The energy released enables hydrogen to act as a fuel. In an electrochemical cell, that energy can be used with relatively high efficiency. If it is simply used for heat, the usual thermodynamics limits the thermal efficiency.
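As a rough illustration of that energy release: the lower heating value of hydrogen (about 241.8 kJ per mole of H2, the textbook figure for the water-vapor reaction above) works out to roughly 120 MJ per kilogram. A sketch in Python:

```python
# Energy released per kilogram of hydrogen burned, using the standard
# lower heating value (water vapour product, as in the equation above).
# Both constants are textbook values.
LHV_PER_MOL = 241.8e3      # J per mole of H2 (lower heating value)
MOLAR_MASS_H2 = 2.016e-3   # kg per mole of H2

energy_per_kg_mj = LHV_PER_MOL / MOLAR_MASS_H2 / 1e6
print(f"~{energy_per_kg_mj:.0f} MJ per kg of H2")   # ~120 MJ/kg
```

For comparison, gasoline carries roughly 44 MJ/kg, which is why hydrogen is so attractive on a per-mass basis.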
Since there is very little free hydrogen gas, hydrogen is in practice only an energy carrier, like electricity, not an energy resource. Hydrogen gas must be produced, and that production always requires more energy than can later be retrieved from the gas as a fuel. This is a consequence of the law of conservation of energy. Most hydrogen production also has environmental impacts.

Hydrogen Production:

Because pure hydrogen does not occur naturally on Earth in large quantities, its industrial production takes a substantial amount of energy. There are different ways to produce it, such as electrolysis and the steam-methane reforming process.
Electrolysis and steam-methane reforming:
In electrolysis, electricity is run through water to separate the hydrogen and oxygen atoms. This method can use wind, solar, geothermal, hydro, fossil fuels, biomass, nuclear, and many other energy sources. Obtaining hydrogen from this process is being studied as a viable way to produce it domestically at a low cost. Steam-methane reforming, the current leading technology for producing hydrogen in large quantities, extracts the hydrogen from methane. However, this reaction causes a side production of carbon dioxide and carbon monoxide, which are greenhouse gases and contribute to global warming.
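The electrolysis side can be put into numbers with Faraday's law: each H2 molecule needs two electrons, so moles of H2 equal charge divided by 2F. The sketch below assumes an operating cell voltage of 1.8 V (a typical practical figure; the thermodynamic minimum is about 1.23 V), so the result is an estimate, not a measured efficiency:

```python
# Electrolysis bookkeeping via Faraday's law.
F = 96485.0            # Faraday constant, C/mol
CELL_VOLTS = 1.8       # assumed practical operating voltage
MOLAR_MASS_H2 = 2.016  # g/mol

energy_j = 1 * 3.6e6                   # 1 kWh expressed in joules
charge_c = energy_j / CELL_VOLTS       # Q = E / V
moles_h2 = charge_c / (2 * F)          # 2 electrons per H2 molecule
grams_h2 = moles_h2 * MOLAR_MASS_H2

print(f"~{grams_h2:.1f} g of H2 per kWh at {CELL_VOLTS} V")  # ~20.9 g
```

Inverting the figure gives roughly 50 kWh of electricity per kilogram of hydrogen, which is why the electricity source dominates both the cost and the emissions of electrolytic hydrogen.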

Energy:

Hydrogen is locked up in enormous quantities in water, hydrocarbons, and other organic matter. One of the challenges of using hydrogen as a fuel is efficiently extracting it from these compounds. Currently, steam reforming, which combines high-temperature steam with natural gas, accounts for the majority of the hydrogen produced. Hydrogen can also be produced from water through electrolysis, but this method is much more energy demanding.

Once extracted, hydrogen is an energy carrier (i.e. a store for energy first generated by other means). The energy can be delivered to fuel cells to generate electricity and heat, or burned to run a combustion engine. In each case, hydrogen is combined with oxygen to form water. The heat in a hydrogen flame is radiant emission from the newly formed water molecules: the molecules form in an excited state and then transition to the ground state, releasing thermal radiation. When burning in air, the flame temperature is roughly 2,000 °C.

Historically, carbon has been the most practical carrier of energy, as more energy is packed into fossil fuels than into pure liquid hydrogen of the same volume. Carbon atoms store energy well and release even more when burned together with hydrogen. However, burning carbon-based fuel and releasing its exhaust contributes to global warming through the greenhouse effect of carbon gases.

Hydrogen is the smallest molecule, and some of it will inevitably escape in micro amounts from any known container or pipe, yet simple ventilation can prevent such leakage from ever reaching the volatile 4% hydrogen-air mixture. As long as the product is in a gaseous or liquid state, pipes are a classic and very efficient form of transportation. Pure hydrogen, though, causes metal to become brittle, suggesting metal pipes may not be ideal for hydrogen transport.
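The volume comparison with fossil fuels can be checked against approximate handbook densities. The figures below are rough textbook values, not measurements from this article:

```python
# Approximate energy densities: liquid hydrogen ~120 MJ/kg at ~71 kg/m^3,
# gasoline ~44 MJ/kg at ~745 kg/m^3. All four numbers are assumptions.
h2_mj_per_kg, h2_kg_per_l = 120.0, 0.071
gas_mj_per_kg, gas_kg_per_l = 44.0, 0.745

h2_mj_per_l = h2_mj_per_kg * h2_kg_per_l       # ~8.5 MJ per litre
gas_mj_per_l = gas_mj_per_kg * gas_kg_per_l    # ~32.8 MJ per litre

print(f"Per kg: H2 {h2_mj_per_kg} MJ vs gasoline {gas_mj_per_kg} MJ")
print(f"Per L : H2 {h2_mj_per_l:.1f} MJ vs gasoline {gas_mj_per_l:.1f} MJ")
```

So hydrogen wins by mass but loses by volume by roughly a factor of four even when liquefied, which is exactly the storage problem described above.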

Uses:

Hydrogen fuel can provide motive power for liquid-propellant rockets, cars, boats and airplanes, and for portable or stationary fuel cell applications, which can power an electric motor. The problems of using hydrogen fuel in cars arise from the fact that hydrogen is difficult to store in either a high-pressure tank or a cryogenic tank.
An alternative fuel must be technically feasible, economically viable, easily converted to another energy form when combusted, safe to use, and potentially harmless to the environment. Hydrogen is the most abundant element in the universe. Although hydrogen does not exist freely in nature, it can be produced from a variety of sources such as steam reformation of natural gas, gasification of coal, and electrolysis of water. Hydrogen gas can be used in traditional gasoline-powered internal combustion engines (ICE) with minimal conversion. However, vehicles with polymer electrolyte membrane (PEM) fuel cells provide greater efficiency. Hydrogen gas combusts with oxygen to produce water vapor, and even the production of hydrogen gas can be emissions-free with the use of renewable energy sources. The current price of hydrogen is about $4 per kg, roughly the cost of a gallon of gasoline.

However, in fuel cell vehicles such as the 2009 Honda FCX Clarity, 1 kg provides about 68 miles of travel. Of course, the vehicle price is currently very high; ongoing research and implementation of a hydrogen economy are required to make this fuel economically feasible. The current focus is on hydrogen as a clean alternative fuel that produces insignificant greenhouse gas emissions. Yet if hydrogen becomes the next transportation fuel, the primary energy source used to produce the vast amounts of hydrogen will not necessarily be a renewable, clean source. Carbon sequestration is referenced frequently as a means to eliminate CO2 emissions from the burning of coal, with the gases captured and sequestered in gas wells or depleted oil wells. However, such sites are not widespread, and the presence of CO2 may acidify groundwater. Storage and transport are also major issues due to hydrogen's low density.
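Putting the two figures above together gives the implied fuel cost per mile; this is simple arithmetic on the article's own numbers:

```python
# $4 per kg of H2, ~68 miles per kg in the 2009 Honda FCX Clarity.
price_per_kg = 4.0     # USD per kg of hydrogen
miles_per_kg = 68.0    # range per kg in the FCX Clarity

cost_per_mile = price_per_kg / miles_per_kg
print(f"~${cost_per_mile:.3f} per mile")   # ~$0.059 per mile
```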

Is the investment in new infrastructure too costly?

Can our old infrastructure currently used for natural gas transport be retrofitted for hydrogen?

The burning of coal and nuclear fission are the main energy sources that will be used to provide an abundant supply of hydrogen fuel.

How does this process help our current global warming predicament? The U.S. Department of Energy has recently funded a research project to produce hydrogen from coal at large-scale facilities, with carbon sequestration in mind.

Is this the wrong approach? Should there be more focus on other forms of energy that produce no greenhouse gas emissions? If the damage to the environment is interpreted as a monetary cost, the promotion of energy sources such as wind and solar may prove to be a more economical approach.

The possibility of a hydrogen economy that incorporates hydrogen into every aspect of transportation requires much further research and development. The most economical and major source of hydrogen in the US is the steam reformation of natural gas, a nonrenewable resource and a producer of greenhouse gases. The electrolysis of water is a potentially sustainable method of producing hydrogen, but only if renewable energy sources are used for the electricity. Today, less than 5% of our electricity comes from renewable sources such as solar, wind, and hydro. Nuclear power may be considered a renewable resource by some, but the waste generated by this energy source is a major problem. A rapid shift toward renewable energy sources is required before this proposed hydrogen economy can prove itself.

Solar photovoltaic (PV) systems are the current focus of my research on the energy source required for electrolysis of water. One project conducted at the GM Proving Ground in Milford, MI used 40 solar PV modules directly connected to an electrolyzer/storage/dispenser system. The result was an 8.5% efficiency in the production of hydrogen, with an average production of 0.5 kg of high-pressure hydrogen per day. Research along these lines may lead to the optimization of the solar hydrogen energy system.

Furthermore, the infrastructure for a hydrogen economy will come with high capital costs. The transport of hydrogen through underground pipes seems the most economical once demand grows enough to require large centralized facilities. However, in places of low population density, this method may not be economically feasible. The project mentioned earlier may become an option for individuals to produce their own hydrogen gas at home, with solar panels lining their roofs. A drastic change is needed to slow down the effects of our fossil-fuel-dependent society.
Conservation can indeed help, but the lifestyles we are accustomed to require certain energy demands.

Li-Fi Technology

By Author – Rashmita Soge

 

Li-Fi is a technology for wireless communication between devices using light to transmit data. In its present state, only LED lamps can be used for the transmission of visible light. Li-Fi is designed to use LED light bulbs similar to those currently in use in many energy-conscious homes and offices; however, Li-Fi bulbs are outfitted with a chip that modulates the light imperceptibly for optical data transmission. Li-Fi data is transmitted by the LED bulbs and received by photoreceptors. Li-Fi's early developmental models were capable of 150 megabits per second (Mbps), and some commercial kits enabling that speed have been released. In the lab, with stronger LEDs and different technology, researchers have achieved 10 gigabits per second (Gbps), which is faster than 802.11ad.

The term was first introduced by Harald Haas during a 2011 TED Global talk in Edinburgh. In technical terms, Li-Fi is a visible light communications system capable of transmitting data at high speeds over the visible light spectrum, ultraviolet, and infrared radiation. In terms of its end use, the technology is similar to Wi-Fi; the key technical difference is that Wi-Fi uses radio frequencies to transmit data. Using light to transmit data allows Li-Fi to offer several advantages, such as working across a higher bandwidth, working in areas susceptible to electromagnetic interference (e.g. aircraft cabins, hospitals), and offering higher transmission speeds. The technology is actively being developed by several organizations across the globe.

Benefits of LiFi:-

  • Higher speeds than Wi-Fi.
  • 10,000 times the frequency spectrum of radio.
  • More secure because data cannot be intercepted without a clear line of sight.
  • Prevents piggybacking.
  • Eliminates neighboring network interference.
  • Unimpeded by radio interference.
  • Does not create interference in sensitive electronics, making it better for use in environments like hospitals and aircraft.

By using Li-Fi in all the lights in and around a building, the technology could enable a greater area of coverage than a single Wi-Fi router. Drawbacks to the technology include the need for a clear line of sight, difficulties with mobility and the requirement that lights stay on for operation.

All existing wireless technologies utilize different frequencies on the electromagnetic spectrum. While Wi-Fi uses radio waves, Li-Fi carries information through visible light communication. Given this, the latter requires a photo-detector to receive light signals and a processor to convert the data into streamable content. The semiconductor nature of LED light bulbs makes them a feasible source of high-speed wireless communication.

So, how does it work? Let's look at the working of Li-Fi:-

When a constant current is applied to an LED bulb, it emits a constant stream of photons observed as visible light. When this current is varied slowly, the bulb dims up and down. Because LED bulbs are semiconductor devices, the current, and hence the optical output, can be modulated at extremely high speeds that can be detected by a photo-detector device and converted back to electrical current.

The intensity modulation is too quick to be perceived by the human eye, so communication appears just as seamless as RF. The technique can thus transmit high-speed information from an LED light bulb. It is also much simpler than RF communication, which requires radio circuits, antennas, and complex receivers.

Li-Fi uses direct modulation methods similar to those used in low-cost infrared communication devices such as remote control units. Infrared communication, however, is limited in power by safety requirements, while LED bulbs have intensities high enough to achieve very large data rates.
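The modulation idea above can be sketched as simple on-off keying, where each bit maps the LED to on or off for one symbol period. This toy Python example only illustrates the principle; real Li-Fi systems run far faster clocks and richer modulation schemes such as OFDM:

```python
# Toy on-off keying: the "LED" is on (1) or off (0) for one symbol
# period per bit; the "photodetector" regroups symbols back into bytes.

def modulate(data: bytes) -> list[int]:
    """Turn bytes into a stream of LED on/off symbols, MSB first."""
    return [(byte >> bit) & 1 for byte in data for bit in range(7, -1, -1)]

def demodulate(symbols: list[int]) -> bytes:
    """Photodetector side: regroup on/off symbols back into bytes."""
    out = bytearray()
    for i in range(0, len(symbols), 8):
        byte = 0
        for s in symbols[i:i + 8]:
            byte = (byte << 1) | s
        out.append(byte)
    return bytes(out)

light = modulate(b"LiFi")
assert demodulate(light) == b"LiFi"
print(light[:8])   # first transmitted byte: 'L' = 0b01001100
```

The "too quick to perceive" point in the text is simply that these on/off transitions happen millions of times per second, far above the flicker rate the eye can resolve.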

 

Wi-Fi vs Li-Fi:-

Now that we know what Li-Fi is and how it works, the question is where it stands when compared to Wi-Fi. To understand which one is superior, let's take a look at certain aspects of both technologies:-

  • Speed:- Li-Fi can potentially deliver data transfer speeds of 224 gigabits per second, which clearly leaves Wi-Fi far behind. In tests conducted by pureLiFi, the technology produced over 100 Gbps in a controlled environment. Moreover, the visible light spectrum is 1,000 times larger than the 300 GHz of RF spectrum, which helps in achieving such high speeds.
  • Energy Efficiency:- Usually, Wi-Fi needs two radios to communicate back and forth which takes a lot of energy to discern the signal from the noise as there may be several devices using the same frequency. Each device has an RF transmitter and baseband chip for enabling communication. However, as Li-Fi uses LED lights, the transmission requires minimal additional power for enabling communication.
  • Security:- One of the main differences between Wi-Fi and Li-Fi is that the former has a wider range (typically 32 meters) and can even be accessed throughout different portions of a building, whereas the latter can't penetrate walls and ceilings and hence is more secure. Although that would mean fitting a separate LED bulb in every room, the technology can be ideal for sensitive operations in R&D, defense, banks, etc. So, in a way, it is not subject to remote piracy and hacking the way Wi-Fi is.
  • Data Density:- Owing to interference issues, Wi-Fi works in a less dense environment while Li-Fi works in a highly dense environment. The area covered by one Wi-Fi access point contains tens or hundreds of lights, and each Li-Fi light can deliver the same speed as a Wi-Fi access point or greater. Therefore, in the same area, Li-Fi can provide 10, 100, or 1,000 times greater wireless capacity.
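The data-density argument boils down to simple multiplication: aggregate capacity scales with the number of lights in the area one access point would serve. The numbers below are illustrative assumptions, not measurements:

```python
# Illustrative assumptions: one Wi-Fi AP at 1 Gbps covers an area that
# contains 100 lights, each Li-Fi light also delivering 1 Gbps.
wifi_ap_speed_gbps = 1.0
lights_in_same_area = 100
lifi_per_light_gbps = 1.0

lifi_total = lights_in_same_area * lifi_per_light_gbps
print(f"Aggregate Li-Fi capacity: {lifi_total:.0f} Gbps "
      f"vs one Wi-Fi AP at {wifi_ap_speed_gbps:.0f} Gbps")
```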

Future Scope:-

Li-Fi provides a great platform to explore the transmission of wireless data at high rates. If this technology is put into practical use, each installed light bulb could be used as a hotspot to transmit data in a cleaner, greener and safer manner. The applications of Li-Fi are beyond imagination at the moment. With this technology, people could access wireless data on the go at very high rates from installed LEDs, which would also resolve the shortage of radio-frequency bandwidth. In various military applications where RF-based communications are not allowed, Li-Fi could be a viable alternative for securely passing data at high rates between military vehicles. LEDs can also be used effectively to carry out VLC in many hospital applications where RF-based communications could be potentially dangerous. Since light cannot penetrate walls, this is a limitation of the technology. Nevertheless, given its high rates of data transmission and applications in multiple fields, Li-Fi is definitely the future of wireless communication.

Google Driverless Car

By Author – Rashmita Soge

 

Introduction to Car:

The Google Driverless Car is like any car, but:

  • It can steer itself while looking out for obstacles.
  • It can accelerate itself to the correct speed limit.
  • It can stop and go itself based on any traffic condition.
  • It can take its passengers anywhere they want to go safely, legally, and comfortably.

What is Google Driverless Car?

A driverless car (sometimes called a self-driving car, an automated car or an autonomous vehicle) is a robotic vehicle that is designed to travel between destinations without a human operator. To qualify as fully autonomous, a vehicle must be able to navigate without human intervention to a predetermined destination over roads that have not been adapted for its use.

Components

The car integrates Google Maps with various hardware sensors and artificial intelligence software.

Google Maps:-

  • Provides the car with road information

Hardware Sensors:-

  • Provides the car with real-time environment conditions

Artificial Intelligence:-

  • Provides the car with real-time decisions

Brief History of the Car:

The origins of automated cars go back to the 1920s. The technology significantly advanced in the 1950s, but it wasn’t until the 1980s with the introduction of computers that truly autonomous vehicles began to become a possibility. Mercedes-Benz, General Motors, Bosch, Nissan, Renault, Toyota, the University of Parma, Oxford University and Google have all developed prototype vehicles since then.

Google’s self-driving car project was formerly led by Sebastian Thrun, former director of the Stanford Artificial Intelligence Laboratory and co-inventor of Google Street View. Thrun’s team at Stanford created the robotic vehicle Stanley, which won the 2005 DARPA Grand Challenge and its US$2 million prize from the United States Department of Defense. The team developing the system consisted of 15 engineers working for Google, including Chris Urmson, Mike Montemerlo, and Anthony Levandowski, who had worked on the DARPA Grand and Urban Challenges.

Here's how Google's cars work:

  • The driver sets a destination. The car's software calculates a route and starts the car on its way.
  • A rotating, roof-mounted LIDAR (Light Detection and Ranging – a technology similar to radar) sensor monitors a 60-meter range around the car and creates a dynamic 3-D map of the car's current environment.
  • A sensor on the left rear wheel monitors sideways movement to detect the car's position relative to the 3-D map.
  • Radar systems in the front and rear bumpers calculate distances to obstacles.
  • Artificial intelligence (AI) software in the car is connected to all the sensors and has input from Google Street View and video cameras inside the car.
  • The AI simulates human perceptual and decision-making processes and controls actions in driver-control systems such as steering and brakes.
  • The car's software consults Google Maps for advance notice of things like landmarks, traffic signs and lights.
  • An override function is available to allow a human to take control of the vehicle.
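The chain of steps above ends in a decision layer that turns sensor input into driving actions. A heavily simplified sketch of such a layer follows; all thresholds and the three-action policy are invented for illustration, and a real planner fuses LIDAR, radar and camera data with far more sophistication:

```python
# Toy decision layer: pick an action from current speed, the posted
# limit (from the map), and the radar gap to the obstacle ahead.

def decide(speed_mph: float, speed_limit: float, radar_gap_m: float) -> str:
    """Return one of 'brake', 'decelerate', 'accelerate', 'hold'."""
    if radar_gap_m < 10:            # obstacle too close: brake hard
        return "brake"
    if speed_mph > speed_limit:     # over the limit: slow down
        return "decelerate"
    if speed_mph < speed_limit and radar_gap_m > 30:
        return "accelerate"         # clear road and under the limit
    return "hold"                   # otherwise maintain speed

print(decide(speed_mph=45, speed_limit=40, radar_gap_m=50))  # decelerate
print(decide(speed_mph=35, speed_limit=40, radar_gap_m=50))  # accelerate
print(decide(speed_mph=35, speed_limit=40, radar_gap_m=5))   # brake
```

Note that braking takes priority over everything else, mirroring the safety-first ordering any real controller must enforce.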

Proponents of systems based on driverless cars say they would eliminate accidents caused by driver error, which is currently the cause of almost all traffic accidents. Furthermore, the greater precision of an automatic system could improve traffic flow, dramatically increase highway capacity and reduce or eliminate traffic jams. Finally, the systems would allow commuters to do other things while traveling, such as working, reading or sleeping.

Technology:

The Waymo project team has equipped a number of different types of cars with the self-driving equipment, including the Toyota Prius, Audi TT, Fiat Chrysler Pacifica and Lexus RX450h. Google has also developed its own custom vehicle, which is assembled by Roush Enterprises and uses equipment from Bosch, ZF Lenksysteme, LG, and Continental.

As of June 2014, the system works with a very high definition inch-precision map of the area the vehicle is expected to use, including how high the traffic lights are; in addition to onboard systems, some computation is performed on remote computer farms.

In May 2016, Google and Fiat Chrysler Automobiles announced an order of 100 Pacifica hybrid minivans to test the technology on.

Google’s robotic cars carry about $150,000 in equipment, including a $70,000 LIDAR system. The rangefinder mounted on the top is a Velodyne 64-beam laser, which allows the vehicle to generate a detailed 3D map of its environment. The car then combines these generated maps with high-resolution maps of the world, producing the data models that allow it to drive itself.

In 2017, Waymo announced a partnership with Intel to jointly develop autonomous driving technology and better processing.

Advantages:

  • Without the need for a driver, cars could become mini-leisure rooms. There would be more space and no need for everyone to face forwards. Entertainment technology, such as video screens, could be used to lighten long journeys without the concern of distracting the driver.
  • Over 80% of car crashes in the USA are caused by driver error. There would be no bad drivers and fewer mistakes on the roads if all vehicles became driverless. Drunk and drugged drivers would also be a thing of the past.
  • Travelers would be able to journey overnight and sleep for the duration.
  • Traffic could be coordinated more easily in urban areas to prevent long tailbacks at busy times. Commute times could be reduced drastically.
  • Reduced or non-existent fatigue from driving, plus arguments over directions and navigation would be a thing of the past.
  • Sensory technology could potentially perceive the environment better than human senses: seeing farther ahead, seeing better in poor visibility, and detecting smaller and more subtle obstacles, leading to fewer traffic accidents.
  • Speed limits could be increased to reflect the safer driving, shortening journey times.
  • Parking the vehicle and difficult maneuvering would be less stressful and require no special skills. The car could even just drop you off and then go and park itself.
  • People who historically have difficulties with driving, such as disabled people and older citizens, as well as the very young, would be able to experience the freedom of car travel. There would be no need for drivers’ licenses or driving tests.
  • Autonomous vehicles could bring about a massive reduction in insurance premiums for car owners.
  • Efficient travel also means fuel savings, cutting costs.
  • Reduced need for safety gaps means that road capacities for vehicles would be significantly increased.
  • Passengers should experience a smoother riding experience.
  • Self-aware cars would lead to a reduction in car theft.

Future of Car:

  • No drivers’ licenses will be needed. Since people of all ages and abilities can use these vehicles, no specific driver certifications are needed. “People do not need a license to sit on a train or bus,” said Dr. Azim Eskandarian, director of the Center For Intelligent Systems Research. ” … So there will not be any special requirements for drivers or occupants to use the vehicle as a form of transportation.”
  • Car-sharing programs will become more mainstream. They will take you to your destination and then be ready for another occupant. "Since cars today are parked for more than 90 percent of their lifetime, shared car services will promote more continuous movement, garner more efficient operation and use less gas," said Dr. Alberto Broggi, IEEE senior member.
  • Infrastructure won’t be prohibitive. Existing roads can already handle the advent of autonomous vehicles. No major overhaul is needed. Broggi directed a project in 2010 that led two driverless cars to complete an 8,000-mile trip between Italy and Shanghai.
  • Say farewell to red lights and stop signs. Once cars are driverless, intersections will be equipped with sensors, cameras, and radar that controls traffic flow. That will not only end collisions but promote a fuel-efficient flow of traffic.
  • High-Occupancy Vehicle lanes might be replaced by Driverless Car lanes, which would not only promote autonomous travel but help driverless cars travel both more safely and faster, reaching speeds of perhaps 100 mph by 2040.

Bluejacking

By Author – Rishabh Sontakke

 

What is Bluejacking?
Bluejacking is the sending of unsolicited messages over Bluetooth to Bluetooth-enabled devices such as mobile phones, PDAs or laptop computers, etc. Bluetooth has a very limited range; usually around 10 meters on mobile phones, but laptops can reach up to 100 meters with powerful transmitters.
Origin of Bluejacking-
This bluejacking phenomenon started after a Malaysian IT consultant named Ajack posted a comment on a mobile phone forum. Ajack told IT Web that he used his Ericsson cellphone in a bank to send a message to someone with a Nokia 7650. Ajack did a Bluetooth discovery to see if there was another Bluetooth device around. Discovering a Nokia 7650 in the vicinity, he created a new contact, filled in the first name field with "Buy Ericsson!" and sent the business card to the Nokia phone.
How to Bluejack:
Assuming that you now have a Bluetooth phone in your hands, the first thing to do is to make sure that Bluetooth is enabled. You will need to read the handbook of the particular phone (or PDA, etc.) that you have, but somewhere in the menu you will find the item that enables and disables Bluetooth. Your phone or PDA will then start to search the airwaves for other devices within range. If you are lucky you will see a list of them appear, or it will say that it cannot find any. If the latter happens, relocate to another crowd or wait a while and try again. If you have a list of found devices, let the fun begin.

The various steps involved – on a mobile phone

  1. First press the 5-way joystick down.
  2. Then choose options.
  3. Then choose “New contact”.
  4. Then in the first line choose your desired message.
  5. Then press done.
  6. Then go to the contact.
  7. Then press options.
  8. Then scroll down to send.
  9. Then choose “Via Bluetooth” and press “Select”.
  10. Then the phone will search for enabled devices.

The various steps involved – on a computer/laptop

  1. Go to contacts in your Address Book program (e.g. Outlook).
  2. Create a new contact.
  3. Enter the message into one of the name fields.
  4. Save the new contact.
  5. Go to the address book.
  6. Right-click on the message/contact.
  7. Go to action.
  8. Go to Send to Bluetooth.
  9. Click on other.
  10. Select a device from the list and double-click on it.
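What the steps above actually construct is a vCard "contact" whose name field carries the message. A sketch of that payload in Python follows; the vCard 2.1 layout is standard, the message text is just an example, and actually pushing the card over Bluetooth (OBEX Object Push) requires hardware and is not shown:

```python
# Build the bluejack payload: a vCard whose displayed name is the
# message, so it appears on the recipient's screen as a contact offer.

def bluejack_vcard(message: str) -> str:
    """Return a vCard 2.1 string whose name fields carry the message."""
    return (
        "BEGIN:VCARD\r\n"
        "VERSION:2.1\r\n"
        f"N:;{message}\r\n"      # structured name: message as given name
        f"FN:{message}\r\n"      # formatted (displayed) name
        "END:VCARD\r\n"
    )

print(bluejack_vcard("Buy Ericsson!"))
```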

Software Tools:

  • Bluespam: BlueSpam searches for all discoverable Bluetooth devices and sends a file to them (spams them) if they support OBEX.
  • Meeting Point: Meeting Point is the perfect tool to search for Bluetooth devices. Combine it with any bluejacking tool and have lots of fun. This software is compatible with Pocket PC and Windows.
  • Freejack: Freejack is compatible with Java phones like the Nokia N-series.
  • Easyjacking (eJack): Allows sending of text messages to other Bluetooth-enabled devices.

Usage of Bluejacking:
Bluejacking can be used in many places, such as busy shopping centers, train stations, high streets, cinemas, cafés/restaurants/pubs, etc. The main uses of bluejacking tools are advertising and location-based services. Experimental results show that the system provides a viable solution for realizing permission-based mobile advertising.

Now, remember that Bluetooth works only over short distances, so you need to find a crowd. Bluejacking is very new, so not everyone will have a Bluetooth phone or PDA (personal digital assistant); the bigger the crowd, the more likely you are to find a victim. On the train, in the café, or standing in line are all good places to start.
Bluejackers often watch for the receiving phone to beep or for the user to react. In order to carry out bluejacking, the sending and receiving devices must be within 10 meters of one another.
Code of Ethics:

  • Bluejackers will only send messages/pictures. They will never try to hack a device for the purpose of copying or modifying any files on any device or upload any executable files.
  • Any such messages or pictures sent will not be insulting, libelous, or vulgar in nature, and will be copyright-free or copyrighted by the sender.
  • If no interest is shown by the recipient after 2 messages, the bluejacker will desist and move on.
  • The bluejacker will restrict their activity to 10 messages maximum unless in exceptional circumstances.
  • If the bluejacker senses that he/she is causing distress rather than mirth to the recipient, they will immediately cease all activity towards them.
  • If a bluejacker is caught in the act he/she will be as co-operative as possible and not hide any details of their activity.

Related Concepts:

  1. BlueSnarfing: Bluesnarfing is the term associated with downloading any and all information from a hacked device. It is the theft of information from a wireless device through a Bluetooth connection, often between phones, desktops, laptops, and PDAs. This allows access to the calendar, contact list, emails, and text messages. Bluesnarfing is much more serious than bluejacking.
  2. Bluebugging: Bluebugging is a form of Bluetooth attack. In order of discovery date, Bluetooth attacks started with bluejacking, then bluesnarfing, and then bluebugging. A bluebug program allows the user to take control of a victim's phone to call the user's phone. This means that the bluebug user can simply listen to any conversation his victim is having in real life.

How to Prevent Being Bluejacked:
To prevent being bluejacked, disable Bluetooth on the mobile device when not in use. The device will not show up on a bluejacker's phone when he/she attempts to send a message, and messages will not queue up.
Good Practices for Bluetooth-Enabled Devices:
Whether someone is unwilling to partake in bluejacking or just does not want to be bothered with these messages, the following are some good practices to consider:

  • Do not reveal your identity when either sending or receiving bluejacked messages.
  • Never threaten anyone.
  • Never send messages that can be considered abusive.
  • Never reveal personal information in response to a bluejacked message.
  • Disable Bluetooth when not in use in order to prevent bluejacked messages.
  • If a bluejacking message is received, delete it instead of accepting it, or it will be added to the device's address book.

Warning:
Never try to hack a device for the purpose of copying or modifying any files on it, or to upload any executable files. By hacking a device you are committing an offense under the Computer Misuse Act 1990, which states it is an offense to obtain unauthorized access to any computer.
Conclusion:
Bluejacking is a technique by which we can interact with new people. It also has the ability to revolutionize marketing by sending advertisements about a product, enterprise, etc. to Bluetooth-configured mobile phones, so that people become aware of them by seeing them on the phone.

Enterprise Resource Planning

By Author – Prankul Sinha

 

Introduction:

  • ERP is usually referred to as a category of business-management software that an organization can use to collect, store, manage, and interpret data from many business activities.
  • ERP provides a continuously updated view of core business processes using common databases maintained by a database management system. ERP systems track business resources (cash, raw materials, production capacity) and the status of business commitments (orders, purchase orders, and payroll). The applications that make up the system share data across the various departments (manufacturing, purchasing, sales, accounting, etc.) that provide the data.
  • The system integrates the organization's various subsystems, enabling error-free transactions and production. It runs on a variety of computer hardware and network configurations, typically using a database as an information source.
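As a toy illustration of the "common database" idea, the sketch below shows two ERP modules (purchasing and sales) reading and writing the same stock record, so both departments always see one consistent figure. All class, method, and item names here are hypothetical, not any real ERP product's API.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of ERP modules sharing one common data store.
public class ErpDemo {
    // The "common database": item code -> units on hand.
    static final Map<String, Integer> inventory = new HashMap<>();

    // Purchasing module: goods received increase stock.
    public static void receiveGoods(String item, int units) {
        inventory.merge(item, units, Integer::sum);
    }

    // Sales module: a confirmed order decreases the same stock record.
    public static boolean confirmOrder(String item, int units) {
        int onHand = inventory.getOrDefault(item, 0);
        if (onHand < units) return false;   // cannot oversell
        inventory.put(item, onHand - units);
        return true;
    }

    public static int onHand(String item) {
        return inventory.getOrDefault(item, 0);
    }
}
```

Because both modules operate on the same record, a sale is immediately visible to purchasing (and vice versa) without any batch reconciliation between departmental systems.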

 

Implementation:
Generally, three types of services are available to help implement such changes: consulting, customization, and support. Implementation time depends on business size, customization, and the scope of process changes. Modular ERP systems can be implemented in stages. A typical project for a large enterprise takes about 14 months and requires around 150 consultants. Small projects can require months; multinational and other large implementations can take years. Customization can substantially increase implementation times.

Besides that, information processing influences various business functions; for example, some large corporations like Wal-Mart use a just-in-time inventory system. This reduces inventory storage, increases delivery efficiency, and requires up-to-date data. Before 2014, Walmart used a system called Inforem, developed by IBM, to manage replenishment.

 

Process preparation:

Implementing ERP typically requires changes in existing business processes. Poor understanding of needed process changes prior to starting implementation is the main reason for project failure. The difficulties could be related to the system, business process, infrastructure, training, or lack of motivation.

It is therefore crucial that organizations thoroughly analyze business processes before they implement ERP software. Analysis can identify opportunities for process modernization. It also enables an assessment of the alignment of current processes with those provided by the ERP system. Research indicates that risk of business process mismatch is decreased by:

  • Linking current processes to the organization's strategy.
  • Analyzing the effectiveness of each process.
  • Understanding existing automated solutions.

 

Customization:
ERP systems are theoretically based on industry best practices, and their makers intend that organizations deploy them as is. ERP vendors do offer customers configuration options that let organizations incorporate their own business rules, but gaps in features often remain even after configuration is complete. ERP customers have several options to reconcile feature gaps, each with their own pros/cons. Technical solutions include rewriting part of the delivered software, writing a homegrown module to work within the ERP system, or interfacing to an external system.

 

Advantages:
The most fundamental advantage of ERP is that the integration of myriad business processes saves time and expense. Management can make decisions faster and with fewer errors. Data becomes visible across the organization. Tasks that benefit from this integration include:

  • Sales forecasting, which allows inventory optimization.

  • Chronological history of every transaction through relevant data compilation in every area of operation.
  • Order tracking, from acceptance through fulfillment
  • Revenue tracking, from invoice through cash receipt
  • Matching purchase orders (what was ordered), inventory receipts (what arrived), and costing

 

Disadvantages:

Customization can be problematic. Compared to the best-of-breed approach, ERP can be seen as meeting an organization's lowest-common-denominator needs, forcing the organization to find workarounds to meet unique demands.

  • Re-engineering business processes to fit the ERP system may damage competitiveness or divert focus from other critical activities.
  • ERP can cost more than less integrated or less comprehensive solutions.
  • High ERP switching costs can increase the ERP vendor’s negotiating power, which can increase support, maintenance, and upgrade expenses.
  • Overcoming resistance to sharing sensitive information between departments can divert management attention.
  • Integration of truly independent businesses can create unnecessary dependencies.
  • Extensive training requirements take resources from daily operations.
  • Harmonization of ERP systems can be a mammoth task and requires a lot of time, planning, and money.

Attacks on Smart Cards

By Author – Samata Shelare

 

When hit by an APT attack, many companies implement smart cards and/or other two-factor authentication mechanisms as a reactionary measure. But thinking that these solutions will prevent credential theft is a big mistake. Attackers can bypass these protection mechanisms with clever techniques.

Nowadays, adversaries in the form of self-spreading malware or APT campaigns utilize Pass-the-Hash, a technique that allows them to escalate privileges in the domain. When Pass-the-Hash is not handy, they will use other techniques such as Pass-the-Ticket or Kerberoasting.

What makes smart cards so special?

A smart card is a piece of specialized cryptographic hardware that contains its own CPU, memory, and operating system. Smart cards are especially good at protecting cryptographic secrets, like private keys and digital certificates.

Smart cards may look like credit cards without the stripe, but they’re far more secure. They store their secrets until the right interfacing software accesses them in a predetermined manner, and the correct second factor PIN is provided. Smart cards often hold users’ personal digital certificates, which prove a user’s identity to an authentication requestor. Even better, smart cards rarely hand over the user’s private key. Instead, they provide the requesting authenticator “proof” that they have the correct private key.

After a company is subjected to a pass-the-hash attack, it often responds by jettisoning weak or easy password hashes. On many occasions, smart cards are the recommended solution, and everyone jumps on board. Because digital certificates aren’t hashes, most people think they’ve found the answer.

In this experiment, we will perform the four most common credential theft attacks on a domain-connected PC with both smart card and 2FA enabled.

  1. Clear text password theft
  2. Pass the hash attack
  3. Pass the ticket attack
  4. Process token manipulation attack
Pass the Smart Card Hash

When authenticating a user with a smart card and PIN (Personal Identification Number) code in an Active Directory network (which is 90% of all networks), the Domain Controller returns an NTLM hash. The hash is calculated based on a randomly selected string. Presenting this hash to the DC identifies you as that user.

This hash can be reused and replayed without the need for the smart card. It is stored in the LSASS process inside the endpoint memory, and it's easily readable by an adversary who has managed to compromise the endpoint, using tools like Mimikatz, WCE, or even just dumping the memory of the LSASS process with the Task Manager. This hash exists in memory because it's crucial for single sign-on (SSO) support.
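The core problem is that the challenge-response proof is computed from the hash, never from the password or the card itself, so whoever holds the hash can answer any future challenge. The simplified sketch below illustrates the protocol shape; SHA-256 and HmacSHA256 stand in for the real NTLM primitives (MD4 and the NTLM response functions), which are not in the standard JDK.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Simplified sketch of why a stolen password hash is as good as the password.
public class PassTheHash {

    // What the client derives once (and what LSASS keeps in memory for SSO).
    public static byte[] passwordHash(String password) {
        try {
            return MessageDigest.getInstance("SHA-256")
                    .digest(password.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Challenge-response: the proof is computed from the HASH, not the password.
    public static byte[] respond(byte[] hash, byte[] challenge) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(hash, "HmacSHA256"));
            return mac.doFinal(challenge);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // The DC stores the same hash and checks the response the same way.
    public static boolean dcVerify(byte[] storedHash, byte[] challenge, byte[] response) {
        return Arrays.equals(respond(storedHash, challenge), response);
    }
}
```

An attacker who dumps the hash from LSASS never needs the password or the smart card: `dcVerify` accepts a response computed from the stolen hash just as readily as one from the legitimate client.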

This is how smart card login works:

  • The user inserts his smart card and enters his own PIN in a login window.
  • The smart card subsystem authenticates the user as the owner of the smart card and retrieves the certificate from the card.
  • The smart card client sends the certificate to the KDC (Kerberos Key Distribution Center) on the DC.
  • The KDC verifies the Smart Card Login Certificate, retrieves the associated user of this certificate, and builds a Kerberos TGT for that user.
  • The KDC returns encrypted TGT back to the client.
  • The smart card client decrypts the TGT and retrieves the NTLM hash from the negotiation.
  • Presenting only the TGT or the NTLM hash from now on will get you authenticated.

During standard login, the NTLM hash is calculated from the user's password, and GPO can force users to change their passwords periodically. Because the smart card doesn't involve a password typed by the user, the hash is only calculated when the "smart card required for interactive logon" attribute is set, and those periodic password changes don't apply to it. This exposes a huge persistence security risk: once the smart card user's computer is compromised, an attacker can grab the hash generated from the smart card authentication. Now he has a hash with an unlimited lifetime, and worse, lifetime persistence on your domain, because the hash will never change as long as smart card logon is forced for that user.

However, Microsoft offers a solution for the smart card persistence problem: it will rotate the hashes of your smart card accounts every 60 days. But this is only applicable if your domain functional level is Windows Server 2016.

In short: smart cards can't protect against Pass-the-Hash, and their hash never changes.

Pass-The-2FA Hash

During authentication with some third-party 2FA solutions, the hash is calculated from the user's managed password. And because the password is managed, it is changed frequently, sometimes even immediately.

In some cases, 2FA managed to mitigate Pass-the-Hash attempts because the hash was calculated using the OTP (one-time password). Therefore, the hash won't be valid anymore, and the adversary who stole it won't be able to authenticate with it.
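The OTP mitigation can be sketched as follows: if the session hash is derived from the current one-time password, a stolen hash dies with that OTP. SHA-256 and the derivation below are illustrative stand-ins, since each vendor's real scheme is proprietary.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

// Sketch of the 2FA mitigation: the session hash is bound to the OTP.
public class OtpHash {
    public static byte[] sessionHash(String managedPassword, String otp) {
        try {
            return MessageDigest.getInstance("SHA-256")
                    .digest((managedPassword + ":" + otp).getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // The DC recomputes the hash with the CURRENT OTP; a replayed hash
    // from an earlier session no longer matches.
    public static boolean verify(byte[] presented, String managedPassword, String currentOtp) {
        return Arrays.equals(presented, sessionHash(managedPassword, currentOtp));
    }
}
```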

Other vendors like AuthLite mitigate Pass-the-Hash attempts because the cached hash of 2FA sessions is manipulated by AuthLite, so stealing the hash from memory is useless. There's still additional verification in the DC, and the OTP must be forwarded to AuthLite before authenticating with a 2FA token.

Depending on the 2FA solution you have, you probably won't be able to Pass-the-Hash.

With their embedded microchip technology and the secure authentication they can provide, smart cards or hardware tokens have been relied upon to give physical access and the go-ahead for data transfer in a multitude of applications and transactions, in the public, corporate, and government sectors.

But, robust as they are, smart cards do have weaknesses, and intelligent hackers have developed a variety of techniques for observing and blocking their operations, so as to gain access to credentials, information, and funds. In this article, we will take a closer look at the technology and at the most common smart card attacks.

Smart Communications

Small information packets called Application Protocol Data Units (APDUs) are the basis of communication between a Card Accepting Device (CAD) and a smart card, which may take the form of a standard credit-card-sized unit, the SIM card for a smartphone, or a USB dongle.
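A command APDU has a fixed four-byte header (class CLA, instruction INS, parameters P1 and P2), optionally followed by a length byte and a data field. The minimal encoder below sketches this wire format; the instruction value used in the usage note (0xA4 = SELECT) follows ISO/IEC 7816-4, but the class itself is an illustrative helper, not any card library's API.

```java
import java.io.ByteArrayOutputStream;

// Minimal encoder for a short command APDU: CLA INS P1 P2 [Lc data].
// The expected-response-length byte (Le) is omitted for brevity.
public class Apdu {
    public static byte[] command(int cla, int ins, int p1, int p2, byte[] data) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(cla);
        out.write(ins);
        out.write(p1);
        out.write(p2);
        if (data != null && data.length > 0) {
            out.write(data.length);           // Lc: length of the data field
            out.write(data, 0, data.length);  // the data field itself
        }
        return out.toByteArray();
    }
}
```

For example, `Apdu.command(0x00, 0xA4, 0x04, 0x00, aid)` encodes a SELECT-by-application-identifier command.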

Data travels between the smart card and CAD in one direction at a time, and both objects use an authentication protocol to identify each other. A random number generated by the card is sent to the CAD, which uses a shared encryption key to scramble the digits before sending the number back. The card compares this returned figure with its own encryption, and a reverse process occurs as the communication exchange continues.
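The challenge-response exchange above can be sketched in a few lines: the card issues a random number, the CAD encrypts it with the shared key, and the card accepts the CAD only if the returned ciphertext matches its own encryption. This is a simplified single-direction sketch (real cards run it both ways); DES is used because the article names it, though it is considered weak today.

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch of the card <-> CAD authentication step described above.
public class CardAuth {
    static byte[] encrypt(SecretKey key, byte[] data) {
        try {
            Cipher c = Cipher.getInstance("DES/ECB/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, key);
            return c.doFinal(data);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Helper: generate a shared DES key (in reality personalized onto the card).
    public static SecretKey newKey() {
        try {
            return KeyGenerator.getInstance("DES").generateKey();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Card side: generate an 8-byte random challenge.
    public static byte[] newChallenge() {
        byte[] nonce = new byte[8];
        new SecureRandom().nextBytes(nonce);
        return nonce;
    }

    // CAD side: prove key possession by returning the encrypted challenge.
    public static byte[] cadRespond(SecretKey sharedKey, byte[] challenge) {
        return encrypt(sharedKey, challenge);
    }

    // Card side: accept the CAD only if its ciphertext matches the card's own.
    public static boolean cardVerify(SecretKey sharedKey, byte[] challenge, byte[] response) {
        return Arrays.equals(encrypt(sharedKey, challenge), response);
    }
}
```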

Each message between the two is authenticated by a special code: a figure based on the message content, a random number, and an encryption key. Symmetric DES (Data Encryption Standard), 3DES (triple DES), and public-key RSA (Rivest-Shamir-Adleman) are the encryption methods most commonly used.

Generally secure, then; but hackers using brute-force methods are capable of breaking each of these encryptions, given enough time and sufficiently powerful hardware.

OS-Level Protection

Smart card operating systems organize their data into a three-level hierarchy. At the top, the root or Master File (MF) may hold several Dedicated Files (DFs: analogous to directories or folders) and Elementary Files (EFs: like regular files on a computer). But DFs can also hold files, and all three levels use headers which spell out their security attributes and user privileges. Applications may only move to a position on the OS if they have the relevant access rights.
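The MF/DF/EF hierarchy and its header-based access control can be modeled as a small tree. The classes, level numbers, and file names below are a hypothetical illustration of the structure, not a real card OS interface.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the smart card file hierarchy: a Master File (MF)
// holding Dedicated Files (DFs) and Elementary Files (EFs), where each
// header carries a minimum access level an application must meet.
public class CardFs {
    public static class Node {
        public final String name;
        public final int requiredAccessLevel;     // from the file header
        public final List<Node> children = new ArrayList<>();

        Node(String name, int level) {
            this.name = name;
            this.requiredAccessLevel = level;
        }

        Node add(Node child) {
            children.add(child);
            return child;
        }
    }

    // An application may move to a node only with sufficient access rights.
    public static boolean canAccess(Node node, int appLevel) {
        return appLevel >= node.requiredAccessLevel;
    }

    // Example tree: MF -> DF (telecom) -> EF (phonebook), plus a restricted EF.
    public static Node demoTree() {
        Node mf = new Node("MF", 0);
        Node telecom = mf.add(new Node("DF.TELECOM", 1));
        telecom.add(new Node("EF.PHONEBOOK", 1));
        mf.add(new Node("EF.KEYS", 3));           // admin-only elementary file
        return mf;
    }
}
```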

Personal Identification Numbers (PINs) are associated with the Cardholder verification 1 (CHV1) and Cardholder verification 2 (CHV2) levels of access, which correspond to the user PIN allocated to a cardholder and the unblocking code needed to re-enable a compromised card.

The operating system blocks a card after an incorrect PIN is entered a certain number of times. This condition applies to both the user PIN and the unblocking code; while it provides a measure of security against fraud for the cardholder, it also provides malicious intruders with an opportunity for sabotage by locking a user out of their own accounts.
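The CHV1/CHV2 blocking behaviour described above can be sketched as a simple retry counter. The three-attempt limit and the class itself are illustrative assumptions; real cards also limit attempts on the unblocking code.

```java
// Sketch of PIN blocking: the card blocks itself after too many wrong
// PINs and can only be revived with the unblocking code (CHV2 / PUK).
public class PinLock {
    private final String pin;
    private final String unblockCode;
    private int triesLeft = 3;
    private boolean blocked = false;

    public PinLock(String pin, String unblockCode) {
        this.pin = pin;
        this.unblockCode = unblockCode;
    }

    public boolean verify(String attempt) {
        if (blocked) return false;                 // card refuses all PINs
        if (pin.equals(attempt)) {
            triesLeft = 3;                         // success resets the counter
            return true;
        }
        if (--triesLeft == 0) blocked = true;      // third wrong PIN blocks the card
        return false;
    }

    public boolean isBlocked() { return blocked; }

    public boolean unblock(String code) {
        if (unblockCode.equals(code)) {
            blocked = false;
            triesLeft = 3;
            return true;
        }
        return false;
    }
}
```

Note how the sabotage risk follows directly from the design: anyone with brief access to the card can enter three wrong PINs and lock the legitimate user out.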

Host-Based Security

Systems and networks using host-based security deploy smart cards as simple carriers of information. Data on the cards may be encrypted, but protecting it is the responsibility of the host system, and information may be vulnerable as it's being transmitted between card and computer.

Employing smart memory cards with a password mechanism that prevents unauthorized reading offers some additional protection, but passwords can still become accessible to hackers as unencrypted transmissions move between the card and the host.

Card-Based Security

Systems with card- or token-based security treat smart cards with microprocessors as independent computing devices. During interactions between cards and the host system, user identities can be authenticated and a staged protocol put in place to ensure that each card has clearance to access the system.

Access to data on the card is controlled by its own operating system, and any pre-configured permissions set by the organization issuing the card. So for hackers, the target for infiltration or sabotage becomes the smart card itself, or some breach of the issuing body which may affect the condition of cards issued in the future.

Physical Vulnerabilities

For hackers, gaining physical access to the embedded microchip on a smart card is a comparatively straightforward process.

Physical tampering is an invasive technique that begins with removing the chip from the surface of the plastic card. It's a simple enough matter of cutting away the plastic behind the chip module with a sharp knife until the epoxy resin binding it to the card becomes visible. This resin can then be dissolved with a few drops of fuming nitric acid, after which the card is shaken in acetone until the resin and acid are washed away.

The attacker may then use an optical microscope with camera attachment to take a series of high-resolution shots of the microchip surface. Analysis of these photos can reveal the patterns of metal lines tracing the cards data and memory bus pathways. Their goal will be to identify those lines that need to be reproduced in order to gain access to the memory values controlling the specific data sets they are looking for.

 

Digital India: Planning for the Future or Enforcement of Technology?

A lot has been done to date by the government of India under the Digital India plan. The idea behind the plan was to improve ourselves with respect to the digital world and to adapt to the changes happening worldwide, so as not to lag behind. The Digital India website states that the vision is to transform India into a digitally empowered society and knowledge economy. But my main concern is that, two years after the program was launched, I don't see many people other than the young generation who are at all ready to run parallel with the rest of India.

Digital India, to them, is like thrusting a piece of cake down their throats: it is good, healthy, and even delicious, but still not palatable to those who don't want to eat. This is how most of the people in this country feel.

The main problem is that nobody knows what to do or how to do it, and after some struggle it feels uncomfortable to them. The initiatives on infrastructure, services, and empowerment are really appreciable, yet they do not reach most of the audience. What is needed is consultation; it is provided, but not in a well-guided manner, which eventually bears no fruit.

The plans under Digital India, Startup India, and Skill India are also making a great impact, but the thrust of the hammer is still not enough to bend the metal: a lot more promotion and consultation is needed to reach out to people. People have endless ideas for rural development, strong infrastructure, and economic growth, but proper monitoring is needed, just as a tree needs the most care when it is a sapling.

The Digital India plan is definitely a boon for all individuals if they can utilize the opportunity. The website gives a complete description of the approach and methodology of the Digital India program, which sounds very active in words, but very little of it has been acted upon, to my knowledge.

 

Well, I am not here just to talk about everything dark going on in this world; many things are going positively and have actually changed after the Digital India plan. Plans that have actually made an impact on governance under Digital India include:

  • High-speed connectivity – High-speed internet in the most remote and inaccessible areas, to grow communication and connect India to the world and newer ideas. It's a National Rural Internet Mission.
  • E-Governance – Improving governance using technology. This is to improve the government-to-citizen interface for various service deliveries.
  • E-Kranti – Delivering services electronically, and thus in a faster and time-bound manner. This is helpful in education, healthcare, planning, security, financial inclusion, justice, farming, etc.
  • Information for all – This will bring in transparency and accountability through easy and open access to documents and information for citizens.
  • Electronics manufacturing – This will encourage the manufacturing of electronics in India, reduce electronics imports, and help in job creation. It will also help in achieving the goals of the Make in India initiative.
  • Cyber Security – The government is now focusing on securing data, which often leaks and can end up in the wrong hands.
  • IT for jobs – The Skill India mission under Digital India helps students gain practical, industry-level experience to enhance their performance.

After seeing all these points, I think I have deepened your dilemma: is the Digital India plan/campaign really doing any good, or is it doing great? Well, my point is that we are doing well, but change only happens when you do great work at a great pace, and I am much more concerned about our very slow speed of growth and learning. We need to implement everything fast, but at the same time make it convenient for people to use; otherwise it won't help anyone until we act on it strongly and boldly.

C. James Yen once said beautifully: "Technical know-how of the experts must be transformed into practical do-how of the people."

Artificial Eye

By Author – Rishabh Sontakke

 

An artificial eye is a replacement for a natural eye lost because of injury or disease. Although the replacement cannot provide sight, it fills the cavity of the eye socket and serves as a cosmetic enhancement. Before the availability of artificial eyes, a person who lost an eye usually wore a patch. An artificial eye can be attached to muscles in the socket to provide eye movement.

Today, most artificial eyes are made of plastic, with an average life of about 10 years. Children require more frequent replacement of the artificial eye due to rapid growth. As many as four or five artificial eyes may be required from babyhood to adulthood.

According to the Society for the Prevention of Blindness, between 10,000 and 12,000 people per year lose an eye. Though 50% or more of these eye losses are caused by an accident (in one survey more males lost their eyes to accidents compared to females), there are a number of genetic conditions that can cause eye loss or require an artificial eye. Microphthalmia is a birth defect where for some unknown reason the eye does not develop to its normal size. These eyes are totally blind, or at best might have some light sensitivity.

 

Society is an artificial construction, a defense against nature's power


Some people are also born without one or both eyeballs, a condition called anophthalmia.

Retinoblastoma is a congenital (existing at birth) cancer or tumor, which is usually inherited. If a person has this condition in just one eye, the chances of passing it on are one in four or 25%.

There are two key steps in replacing a damaged or diseased eye.

–First, an ophthalmologist or eye surgeon must remove the natural eye. There are two types of operations.

  • The enucleation removes the eyeball by severing the muscles, which are connected to the sclera (white of the eyeball).
  • The surgeon then cuts the optic nerve and removes the eye from the socket.

–Second, an implant is then placed into the socket to restore lost volume and to give the artificial eye some movement, and the wound is then closed.

Evisceration – In this operation, the surgeon makes an incision around the iris and then removes the contents of the eyeball. A ball made of some inert material such as plastic, glass, or silicone is then placed inside the eyeball, and the wound is closed.

Conformer – Here the surgeon places a conformer (a plastic disc) into the socket. The conformer prevents shrinking of the socket and retains adequate pockets for the artificial eye. Conformers are made of silicone or hard plastic. After the surgery, it takes the patient four to six weeks to heal. The artificial eye is then made and fitted by a professional.

Raw Materials

Plastic is the main material that makes up the artificial eye. Wax and plaster of Paris are used to make the molds. A white powder called alginate is used in the molding process. Paints and other decorating materials are used to add life-like features to the prosthesis.

 

The eyes are the mirror of the soul


The Manufacturing Process

The time to make an artificial eye from start to finish varies with each ocularist and the individual patient. A typical time is about 3.5 hours. Ocularists continue to look for ways to reduce this time.

There are two types of Artificial Eye.

–The very thin, shell type is fitted over a blind, disfigured eye or over an eye which has been just partially removed.

–The full modified impression type is made for those who have had eyeballs completely removed. The process described here is for the latter type.

  1. The ocularist inspects the condition of the socket.
  2. The ocularist paints the iris. An iris button (made from a plastic rod using a lathe) is selected to match the patient's own iris diameter.
  3. Next, the ocularist hand-carves a wax molding shell. This shell has an aluminum iris button embedded in it that duplicates the painted iris button. The wax shell is fitted into the patient's socket so that it matches the irregular periphery of the socket.
  4. The impression is made using alginate, a white powder made from seaweed that is mixed with water to form a cream. After mixing, the cream is placed on the back side of the molding shell and the shell is inserted into the socket.
  5. The iris color is then rechecked and any necessary changes are made.
  6. A plaster-of-Paris cast is made of the mold of the patient's eye socket. After the plaster has hardened (about seven minutes), the wax and alginate mold are removed and discarded.
  7. The plastic hardens in the shape of the mold, with the painted iris button embedded in the proper place.
  8. The prosthesis is then returned to the cast. Clear plastic is placed in the anterior half of the cast and the two halves are again joined, placed under pressure, and returned to the hot water. The artificial eye is finally ready for fitting.

 

The eyes tell more than the word could ever say

 

The Future

Improvements will continue in the artificial eye, benefiting both patient and ocularist. Several developments have already occurred in recent years. An artificial eye with two different-sized pupils, which can be changed back and forth by the wearer, was invented in the early 1980s. In the same period, a soft contact lens with a large black pupil was developed that simply lies on the cornea of the artificial eye.

In 1989, a patented implant called the Bio-eye was approved by the United States Food and Drug Administration. Today, over 25,000 people worldwide have benefited from this development, which is made from hydroxyapatite, a material converted from ocean coral that has both the porous structure and the chemical structure of bone. In addition to natural eye movement, this type of implant has reduced migration and extrusion, and prevents drooping of the lower lid by lending support to the artificial eye via a peg connection.

With advancements in computer, electronics, and biomedical engineering technology, it may someday be possible to have an artificial eye that can provide sight as well. Work is already in progress to achieve this goal, based on advanced microelectronics and sophisticated image recognition techniques.

Researchers at MIT and Harvard University are also developing what will be the first artificial retina. This is based on a?biochip?that is glued to the ganglion cells, which act as the eye’s data concentrators. The chip is composed of a tiny array of etched-metal electrodes on the retina side and a single sensor with integrated logic on the pupil side. The sensor responds to a small?infrared?laser?that shines onto it from a pair of glasses that would be worn by the artificial-retinal recipient.

Introduction to Java

By Author – Rashmita Soge

 

Java is a programming language created by James Gosling of Sun Microsystems (Sun) in 1991. The goal of Java is to write a program once and then run it on multiple operating systems. The first publicly available version of Java (Java 1.0) was released in 1995. Sun Microsystems was acquired by the Oracle Corporation in 2010, and Oracle now has stewardship of Java. In 2006 Sun started to make Java available under the GNU General Public License (GPL); Oracle continues this project, called OpenJDK. Over time, new enhanced versions of Java have been released. The current version of Java is Java 1.8, which is also known as Java 8.

Java is defined by a specification and consists of a programming language, a compiler, core libraries, and a runtime (the Java virtual machine). The Java runtime allows software developers to write program code in languages other than the Java programming language that still runs on the Java virtual machine. The Java platform is usually associated with the Java virtual machine and the Java core libraries.

What is Java?

Java is a general-purpose, class-based, object-oriented, platform-independent, portable, architecture-neutral, multithreaded, dynamic, distributed, robust, interpreted programming language.

It is intended to let application developers "write once, run anywhere", meaning that compiled Java code can run on all platforms that support Java without the need for recompilation.

History of Java

Java is the brainchild of Java pioneer James Gosling, who traces Java's core idea of "Write Once, Run Anywhere" back to work he did in graduate school.

After spending time at IBM, Gosling joined Sun Microsystems in 1984. In 1991, Gosling partnered with Sun colleagues Michael Sheridan and Patrick Naughton on Project Green, to develop new technology for programming next-generation smart appliances. Gosling, Naughton, and Sheridan set out to develop the project based on certain rules, specifically tied to performance, security, and functionality. Those rules were that Java must be:

  1. Secure and robust
  2. High performance
  3. Portable and architecture-neutral, which means it can run on any combination of software and hardware
  4. Threaded, interpreted, and dynamic
  5. Object-oriented

Over time, the team added features and refinements that extended the heritage of C and C++, resulting in a new language called Oak, named after a tree outside Gosling's office.

After efforts to use Oak for interactive television fell through, the technology was retargeted for the World Wide Web. The team also began working on a web browser as a demonstration platform.

Because of a trademark conflict, Oak was renamed Java, and in 1995, Java 1.0a2, along with the HotJava browser, was released. The Java language was designed with the following properties:

  • Platform independent: Java programs use the Java virtual machine as an abstraction and do not access the operating system directly. This makes Java programs highly portable. A Java program (which is standard-compliant and follows certain rules) can run unmodified on all supported platforms, e.g., Windows or Linux.
  • Object-oriented programming language: Except for the primitive data types, all elements in Java are objects.
  • Strongly-typed programming language: Java is strongly typed: the types of the variables used must be declared in advance, and conversion to other types is relatively strict, e.g., it must in most cases be done explicitly by the programmer.
  • Interpreted and compiled language: Java source code is compiled into the bytecode format, which does not depend on the target platform. These bytecode instructions are interpreted by the Java Virtual Machine (JVM). The JVM contains a so-called HotSpot compiler which translates performance-critical bytecode instructions into native code instructions.
  • Automatic memory management: Java manages the memory allocation and de-allocation for creating new objects. The program does not have direct access to the memory. The so-called garbage collector automatically deletes objects to which no active reference exists.
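The sketch below (class and method names are chosen here for illustration) demonstrates three of the properties above in a few lines: strong typing with explicit casts, objects as the unit of organization, and heap allocation that the garbage collector cleans up automatically.

```java
// Illustrates strong typing, objects, and automatic memory management.
public class LanguageProperties {
    // A simple class: apart from primitives, all values in Java are objects.
    static class Point {
        final int x;
        final int y;
        Point(int x, int y) { this.x = x; this.y = y; }
        int distSquared() { return x * x + y * y; }
    }

    public static int demo() {
        int n = 3;                    // primitive: type fixed at declaration
        double d = n;                 // widening conversion is implicit...
        int back = (int) d;           // ...but narrowing requires an explicit cast

        Point p = new Point(back, 4); // object allocated on the heap
        // Once no active reference to the Point remains, the garbage
        // collector reclaims it -- there is no manual free/delete in Java.
        return p.distSquared();       // 3*3 + 4*4 = 25
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Attempting `int back = d;` without the cast would be rejected at compile time, which is exactly what "strongly typed" means in practice.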

How Does Java Work?

To understand the primary advantage of Java, you’ll have to learn about platforms. In most programming languages, a compiler generates code that can execute on a specific target machine. For example, if you compile a C++ program on a Windows machine, the executable file can be copied to any other machine, but it will only run on other Windows machines. A platform is determined by the target machine along with its operating system. For earlier languages, language designers needed to create a specialized version of the compiler for every platform. If you wrote a program that you wanted to make available on multiple platforms, you, as the programmer, would have to do quite a bit of additional work: you would have to create multiple versions of your source code, one for each platform.

Java succeeded in eliminating the platform issue for high-level programmers because it reorganized the compile-link-execute sequence at an underlying level of the compiler. The details are complicated, but essentially the designers of the Java language isolated those programming issues which depend on the platform and developed low-level means to refer to these issues abstractly. Consequently, the Java compiler doesn’t create a native object file; instead it creates a bytecode file which is, essentially, an object file for a virtual machine. In fact, the Java compiler is often called the JVM compiler. To summarize how Java works, think about the compile-link-execute cycle. In earlier programming languages, the cycle is more closely defined as “compile-link, then execute”. In Java, the cycle is closer to “compile, then link-execute”.
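One way to see the platform abstraction described above at work: the JVM resolves platform details at run time, not at compile time. The identical bytecode file produced from the class below (the class name `PlatformInfo` is chosen here for the example) reports a different operating system on each host it runs on, via the standard `System.getProperty` API.

```java
// The compiler emits platform-neutral bytecode (a .class file); the JVM
// executing it supplies the platform-specific details at run time.
public class PlatformInfo {
    public static String osName() {
        // Resolved by the running JVM, not baked in at compile time.
        return System.getProperty("os.name");
    }

    public static String jvmName() {
        return System.getProperty("java.vm.name");
    }

    public static void main(String[] args) {
        System.out.println("Running on: " + osName());
        System.out.println("Inside JVM: " + jvmName());
    }
}
```

Running the same `PlatformInfo.class` on Windows and on Linux prints different values, while the bytecode itself never changes.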

Future of Java

Java is not a legacy programming language, despite its long history. The robust use of Maven, the build tool for Java-based projects, debunks the theory that Java is outdated. Although there are a variety of deployment tools on the market, Apache Maven is by far one of the most widely used automation tools developers rely on to build and deploy software applications.

With Oracle's commitment to Java for the long haul, it's not hard to see why Java will remain among the major programming languages for years to come. 2017 will see the release of the eighth version of the enterprise platform, Java EE 8.

Despite its areas for improvement, and competition from rival programming languages like .NET, Java is here to stay. Oracle has plans for a new version release in the early part of 2017, with new supportive features that will strongly appeal to developers. Java's multitude of strengths as a programming language means its use in the digital world will only solidify. A language that was inherently designed for ease of use has proved itself functional and secure over the course of more than two decades. Developers who appreciate technological change can also rest assured that the tried-and-true language of Java will likely always have a significant place in their toolset.
