Business

Digital India: Planning for the Future, or Enforcement of Technology?

A lot has been done to date by the Government of India under the Digital India plan. The idea behind the plan was to improve ourselves in the digital world and to adapt to change as the world changes, so as not to lag behind. The Digital India website states that the programme's vision is to transform India into a digitally empowered society and knowledge economy. But my main concern is that, two years after the programme's launch, I do not see many people other than the young generation ready to run parallel with the rest of India.

To them, Digital India is like thrusting a piece of cake down their throat: it is good, healthy, and even delicious, but still unpalatable to those who do not want to eat. This is how many people in this country feel.

The main problem is that nobody knows what to do or how to do it, and after some struggle it feels uncomfortable to continue. The initiatives on infrastructure, services, and empowerment are appreciable, yet they do not reach most of the audience. What is needed is consultation; it is provided, but not in a well-guided manner, and so it eventually bears no fruit.

The plans under Digital India, Startup India, and Skill India are also making an impact, but the thrust of the hammer is still not enough to bend the metal: much more promotion and consultation is needed to reach out to people. People have endless ideas for rural development, strong infrastructure, and economic growth, but proper monitoring is needed, just as a tree needs the most care when it is a sapling.

The Digital India plan is definitely a boon for all individuals if they can utilize the opportunity. The website carries a complete description of the programme's approach and methodology, which reads as very active, but as far as I know very little of it has actually been acted upon.

 

Well, I am not here just to talk about everything dark that is going on; many things have gone well and have actually changed after the Digital India plan. The plans that have actually made an impact on governance under Digital India can be mentioned as follows:

  • High-speed connectivity and high-speed internet in the most remote and inaccessible areas, to grow communication and connect India to the world and to newer ideas. This is the National Rural Internet Mission.
  • E-Governance – Improving governance using technology, so as to improve the government-to-citizen interface for various service deliveries.
  • E-Kranti – Delivering services electronically, in a faster and time-bound manner. This is helpful in education, healthcare, planning, security, financial inclusion, justice, farming, etc.
  • Information for All – This will bring in transparency and accountability through easy and open access to documents and information for citizens.
  • Electronics Manufacturing – This will encourage the manufacturing of electronics in India, reduce electronics imports, and help in job creation. It will also help in achieving the goals of the Make in India initiative.
  • Cyber Security – The government is now focusing on the security of data, which often leaks and can end up in the wrong hands.
  • IT for Jobs – The Skill India mission under Digital India is helping students gain practical, industry-level experience to enhance their performance.

After seeing all these points, I think I have only deepened your dilemma: is the Digital India plan really doing any good, or is it doing great? My point is that we are doing good, but real change happens when you do great, and at a great pace; I am most concerned about our very slow speed of growth and learning. We need to implement everything fast, but at the same time make it convenient for people to use; otherwise it will help no one until we act on it strongly and boldly.

Y. C. James Yen once said it beautifully: "Technical know-how of the experts must be transformed into practical do-how of the people."

Artificial Eye

By Author – Rishabh Sontakke

 

An artificial eye is a replacement for a natural eye lost through injury or disease. Although the replacement cannot provide sight, it fills the cavity of the eye socket and serves as a cosmetic enhancement. Before artificial eyes became available, a person who lost an eye usually wore a patch. An artificial eye can be attached to muscles in the socket to provide eye movement.

Today, most artificial eyes are made of plastic, with an average life of about 10 years. Children require more frequent replacement due to rapid growth; as many as four or five artificial eyes may be required from babyhood to adulthood.

According to the Society for the Prevention of Blindness, between 10,000 and 12,000 people per year lose an eye. Though 50% or more of these losses are caused by accidents (in one survey, more males lost eyes to accidents than females), a number of genetic conditions can also cause eye loss or require an artificial eye. Microphthalmia is a birth defect in which, for some unknown reason, the eye does not develop to its normal size. These eyes are totally blind or, at best, might have some light sensitivity.

 

Society is an artificial construction, a defense against nature's power


Some people are also born without one or both eyeballs, a condition called anophthalmia.

Retinoblastoma is a congenital (existing at birth) cancer or tumor, which is usually inherited. If a person has this condition in just one eye, the chances of passing it on are one in four, or 25%.

There are two key steps in replacing a damaged or diseased eye.

–First, an ophthalmologist or eye surgeon must remove the natural eye. There are two types of operations:

  • Enucleation – the surgeon removes the eyeball by severing the muscles connected to the sclera (the white of the eyeball), then cuts the optic nerve and removes the eye from the socket.
  • Evisceration – the surgeon makes an incision around the iris and removes the contents of the eyeball. A ball made of an inert material such as plastic, glass, or silicone is then placed inside the eyeball, and the wound is closed.

–Second, an implant is placed into the socket to restore lost volume and to give the artificial eye some movement, and the wound is then closed.

Conformer – Here the surgeon places a conformer (a plastic disc) into the socket. The conformer prevents shrinking of the socket and retains adequate pockets for the artificial eye. Conformers are made of silicone or hard plastic. After the surgery, it takes the patient four to six weeks to heal. The artificial eye is then made and fitted by a professional.

Raw Materials

Plastic is the main material that makes up the artificial eye. Wax and plaster of Paris are used to make the molds. A white powder called alginate is used in the molding process. Paints and other decorating materials are used to add life-like features to the prosthesis.

 

The eyes are the mirror of the soul


The Manufacturing Process

The time to make an artificial eye from start to finish varies with each ocularist and the individual patient; a typical time is about 3.5 hours. Ocularists continue to look for ways to reduce this time.

There are two types of artificial eye:

–The very thin shell type is fitted over a blind, disfigured eye or over an eye that has been only partially removed.

–The full modified impression type is made for those who have had the eyeball completely removed. The process described here is for the latter type.

  1. The ocularist inspects the condition of the socket.
  2. The ocularist paints the iris. An iris button (made from a plastic rod using a lathe) is selected to match the patient's own iris diameter.
  3. Next, the ocularist hand-carves a wax molding shell. This shell has an aluminum iris button embedded in it that duplicates the painted iris button. The wax shell is fitted into the patient's socket so that it matches the irregular periphery of the socket.
  4. The impression is made using alginate, a white powder made from seaweed that is mixed with water to form a cream. After mixing, the cream is placed on the back side of the molding shell and the shell is inserted into the socket.
  5. The iris color is then rechecked and any necessary changes are made.
  6. A plaster-of-Paris cast is made of the mold of the patient's eye socket. After the plaster has hardened (about seven minutes), the wax and alginate mold are removed and discarded.
  7. Plastic is cured in the cast and hardens in the shape of the mold, with the painted iris button embedded in the proper place.
  8. The prosthesis is then returned to the cast. Clear plastic is placed in the anterior half of the cast and the two halves are again joined, placed under pressure, and returned to the hot water. The artificial eye is finally ready for fitting.

 

The eyes tell more than words could ever say

 

The Future

Improvements will continue in the artificial eye, benefiting both patient and ocularist. Several developments have already occurred in recent years. An artificial eye with two different-size pupils, which can be changed back and forth by the wearer, was invented in the early 1980s. In the same period, a soft contact lens with a large black pupil was developed that simply lies on the cornea of the artificial eye.

In 1989, a patented implant called the Bio-eye was approved by the United States Food and Drug Administration. Today, over 25,000 people worldwide have benefited from this development. It is made from hydroxyapatite, a material converted from ocean coral, which has both the porous structure and the chemical composition of bone. In addition to allowing natural eye movement, this type of implant has reduced migration and extrusion, and it prevents drooping of the lower lid by lending support to the artificial eye via a peg connection.

With advancements in computer, electronics, and biomedical engineering technology, it may someday be possible to have an artificial eye that can provide sight as well. Work is already in progress to achieve this goal, based on advanced microelectronics and sophisticated image recognition techniques.

Researchers at MIT and Harvard University are also developing what will be the first artificial retina. This is based on a biochip that is glued to the ganglion cells, which act as the eye's data concentrators. The chip is composed of a tiny array of etched-metal electrodes on the retina side and a single sensor with integrated logic on the pupil side. The sensor responds to a small infrared laser that shines onto it from a pair of glasses worn by the artificial-retina recipient.

Introduction to Java

By Author – Rashmita Soge

 

Java is a programming language created by James Gosling at Sun Microsystems (Sun) in 1991. The goal of Java is to write a program once and then run that program on multiple operating systems. The first publicly available version of Java (Java 1.0) was released in 1995. Sun Microsystems was acquired by the Oracle Corporation in 2010, and Oracle now has stewardship of Java. In 2006, Sun started to make Java available under the GNU General Public License (GPL); Oracle continues this project, called OpenJDK. Over time, new enhanced versions of Java have been released. The current version of Java is Java 1.8, also known as Java 8.

Java is defined by a specification and consists of a programming language, a compiler, core libraries, and a runtime (the Java virtual machine). The Java runtime allows software developers to write program code in languages other than the Java programming language that still runs on the Java virtual machine. The Java platform is usually associated with the Java virtual machine and the Java core libraries.

What is Java?

Java is a general-purpose, class-based, object-oriented, platform-independent, portable, architecturally neutral, multithreaded, dynamic, distributed, robust, interpreted programming language.

It is intended to let application developers "write once, run anywhere", meaning that compiled Java code can run on all platforms that support Java without the need for recompilation.
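
As a minimal sketch of this portability (the class name and message below are my own illustration), a program is compiled once with `javac` into a `.class` bytecode file, and that same file then runs on any operating system with a JVM:

```java
// HelloPortable.java
// Compile once:  javac HelloPortable.java   -> HelloPortable.class (bytecode)
// Run anywhere:  java HelloPortable         (on any OS with a JVM)
public class HelloPortable {
    public static String greeting() {
        return "Hello, JVM";
    }

    public static void main(String[] args) {
        // The same bytecode file prints this on Windows, Linux, or macOS.
        System.out.println(greeting());
    }
}
```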

History of Java

Java is the brainchild of James Gosling, who traces Java's core idea of "Write Once, Run Anywhere" back to work he did in graduate school.

After spending time at IBM, Gosling joined Sun Microsystems in 1984. In 1991, Gosling partnered with Sun colleagues Michael Sheridan and Patrick Naughton on Project Green, to develop new technology for programming next-generation smart appliances. Gosling, Naughton, and Sheridan set out to develop the project based on certain rules, specifically tied to performance, security, and functionality. Those rules were that Java must be:

  1. Secure and robust
  2. High performance
  3. Portable and architecture-neutral, which means it can run on any combination of software and hardware
  4. Threaded, interpreted, and dynamic
  5. Object-oriented

Over time, the team added features and refinements that extended the heritage of C and C++, resulting in a new language called Oak, named after a tree outside Gosling's office.

After efforts to use Oak for interactive television failed to materialize, the technology was retargeted for the World Wide Web. The team also began working on a web browser as a demonstration platform.

Because of a trademark conflict, Oak was renamed Java, and in 1995 Java 1.0a2, along with the browser, named HotJava, was released. The Java language was designed with the following properties:

  • Platform independent: Java programs use the Java virtual machine as an abstraction and do not access the operating system directly. This makes Java programs highly portable. A Java program (which is standard-compliant and follows certain rules) can run unmodified on all supported platforms, e.g., Windows or Linux.
  • Object-oriented programming language: except for the primitive data types, all elements in Java are objects.
  • Strongly typed programming language: the types of all variables must be declared, and conversion to other types is relatively strict, e.g., it must in most cases be done explicitly by the programmer.
  • Interpreted and compiled language: Java source code is compiled into the bytecode format, which does not depend on the target platform. These bytecode instructions are interpreted by the Java Virtual Machine (JVM). The JVM contains a so-called HotSpot compiler, which translates performance-critical bytecode instructions into native code instructions.
  • Automatic memory management: Java manages the memory allocation and de-allocation for creating new objects. The program does not have direct access to the memory. The so-called garbage collector automatically deletes objects to which no active reference exists.
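
The strong typing described above can be shown in a few lines; the class and method names here are illustrative only. A string-to-int conversion must be done explicitly by the programmer, and assigning a `String` directly to an `int` would be rejected at compile time:

```java
// StrongTypingDemo.java -- illustrative sketch of Java's strong typing.
public class StrongTypingDemo {
    public static int parseAndDouble(String digits) {
        // int n = digits;               // would NOT compile: incompatible types
        int n = Integer.parseInt(digits); // explicit String -> int conversion
        return n * 2;
    }

    public static void main(String[] args) {
        System.out.println(parseAndDouble("21")); // prints 42
    }
}
```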

How Java Works

To understand the primary advantage of Java, you'll have to learn about platforms. In most programming languages, a compiler generates code that can execute on a specific target machine. For example, if you compile a C++ program on a Windows machine, the executable file can be copied to any other machine, but it will only run on other Windows machines, never on a different platform. A platform is determined by the target machine along with its operating system. For earlier languages, language designers needed to create a specialized version of the compiler for every platform. If you wrote a program that you wanted to make available on multiple platforms, you, as the programmer, would have to do quite a bit of additional work: you would have to create multiple versions of your program, one for each platform.

Java succeeded in eliminating the platform issue for high-level programmers because it reorganized the compile-link-execute sequence at an underlying level of the compiler. The details are complicated, but, essentially, the designers of the Java language isolated those programming issues that depend on the platform and developed low-level means to refer to these issues abstractly. Consequently, the Java compiler doesn't create a native object file; instead, it creates a bytecode file which is, essentially, an object file for a virtual machine. In fact, the Java compiler is often called the JVM compiler. To summarize how Java works, think about the compile-link-execute cycle. In earlier programming languages, the cycle is more closely defined as "compile-link, then execute". In Java, the cycle is closer to "compile, then link-execute".

Future of Java

Java is not a legacy programming language, despite its long history. The robust use of Maven, the build tool for Java-based projects, debunks the theory that Java is outdated. Although there are a variety of deployment tools on the market, Apache Maven has by far been one of the largest automation tools developers use to deploy software applications.

With Oracle's commitment to Java for the long haul, it's not hard to see why Java will remain part of the programming landscape for years to come and will remain a chosen programming language. 2017 will see the release of Java EE 8.

Despite its areas for improvement, and threats from rival programming languages like .NET, Java is here to stay. Oracle has plans for a new version release in early 2017, with new supportive features that will strongly appeal to developers. Java's multitude of strengths as a programming language means its use in the digital world will only solidify. A language that was inherently designed for easy use has proved itself functional and secure over the course of more than two decades. Developers who appreciate technological change can also rest assured that the tried-and-true Java language will likely always have a significant place in their toolset.

GPS aircraft tracking

By Author – Samata Shelare

 

GPS aircraft tracking is used in both commercial and personal aircraft, and it comes along with a variety of benefits both to safety and convenience. What a GPS does on an aircraft in terms of tracking is a lot different than what a GPS may do in your car. GPS tracking can help to ensure your position in the sky and keep you safe while going about a day of flying.
In order to understand the benefits of GPS aircraft tracking, one first needs to understand how it works. A device with a GPS sensor is fixed in the aircraft and transmits real-time GPS positions of the plane to a server on the ground. This sensor may be placed in a number of different positions on the plane depending on the specific make and model, but all sensors work similarly, tracking the plane's current position at any time. These positions can then be picked up by air traffic controllers on the ground, who can locate airplanes of all sizes and at all elevations, within any given area and at any given time.
GPS aircraft tracking provides a number of benefits, even beyond the obvious ones involving safety. The technology can help calculate flight times to and from any number of destinations, so that pilots get a better estimate of their time of arrival relative to their time of departure, and it can also assist in locating an aircraft in the event of an accident. Additionally, GPS aircraft tracking can even be used in flight schools to allow pilots in training to follow a certain path or flight plan laid out by an instructor.
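
As a rough illustration of the flight-time calculation mentioned above (the class, the airport coordinates, and the cruise speed are my own assumptions, not part of any real tracking system), the great-circle distance between two GPS fixes can be computed with the haversine formula and divided by an assumed ground speed:

```java
// FlightEstimate.java -- hypothetical sketch, not a real avionics API.
public class FlightEstimate {
    static final double EARTH_RADIUS_KM = 6371.0;

    // Haversine great-circle distance between two (lat, lon) fixes in degrees.
    public static double distanceKm(double lat1, double lon1,
                                    double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_KM * 2 * Math.asin(Math.sqrt(a));
    }

    // Estimated flight time in hours at a given ground speed (km/h).
    public static double hoursAt(double distanceKm, double groundSpeedKmh) {
        return distanceKm / groundSpeedKmh;
    }

    public static void main(String[] args) {
        // Assumed coordinates: roughly Delhi (DEL) and Mumbai (BOM).
        double d = distanceKm(28.5562, 77.1000, 19.0896, 72.8656);
        System.out.printf("%.0f km, %.2f h at an assumed 800 km/h%n",
                          d, hoursAt(d, 800));
    }
}
```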
There are already about 100 air traffic facilities using ADS-B, which is why experts can give such a firm estimate of 2020. This is nearly half of the 230 air traffic facilities in the world. Aviation experts believe 2020 is a good estimate for when every one of these facilities will be using the technology, with more and more adding it in the years until then. The hardest part is simply equipping the planes with the new system.
Tracking planes during flight isn't the only thing the ADS-B GPS tracking system can do. It can also provide weather and other pertinent information to pilots in real time, giving them as much advance warning as possible about current environmental conditions that might affect their flying decisions.
One of the big issues in the past with the other 130 air traffic facilities is that it is easy to lose radar contact in certain areas of the world. As was most likely the case with the recently missing Malaysia Airlines plane, the aircraft was probably over water or in another location not easily tracked by ground-based radar. This makes such a plane difficult to track and almost impossible to find out what happened to it.
Feith also mentioned that some flights require the new GPS tracking technology because they fly over the Atlantic or Pacific Ocean and are at greater risk of becoming lost during their flight.
GPS aircraft tracking is quite a bit different from the GPS technology we may use during our everyday lives in a car, but it provides the same amount of benefits when it comes to convenience, safety, and ease of navigation.

GEAR DE-BURRING MACHINE

Gear deburring is a process that has changed substantially over the past 10 years. There have been advancements in the types of tools used for deburring operations and the development of “wet” machines, automatic load and unload, automatic part transfer and turnover, and vision systems for part identification, etc.

Three types of tools are used in the gear deburring process, including grinding wheels, brushes, and carbide tools. A discussion of each method is as follows.

Grinding Wheels
There are many wheel grits available, from 320 grit for small burrs and light chamfers to 57 grit for large burrs and heavy chamfers, with numerous grit sizes in between. Grinding wheels will usually provide the required cosmetic appearance for a deburred gear. Setting up the grinding wheel is critical for good wheel life and consistent chamfers. The point of contact for the grinding wheel should be equal to the approach angle of the grinding head. For example, set a 45° approach angle for the grinding head with a protractor. Next, draw a line through the center of the grinding wheel, followed by a second line at 45° to the first. The contact point between the gear and the grinding wheel should be on the 45° line.

The size of the chamfer attainable is determined by the size of the burr to be removed from the part. Further, three additional factors that affect chamfer size are wheel grit size, the speed of the work spindle, and the amount of pressure applied to the part by the grinding wheel. Grinding wheel speed is noted on the grinding wheel, and it is usually 15,000 to 18,000 RPM. The grinding wheels used most often are aluminum oxide.

Brushes
Parts with small burrs can be effectively deburred with a brush. Two types of brushes are used for deburring operations, those being wire and nylon. Wire brushes are made with straight, crimped, or knotted bristles. The wire diameter and length will determine how aggressively the brush will deburr. Nylon brushes can be impregnated with either aluminum oxide or silicon carbide, with grit size ranging from 80 to 400. The specific application will determine which type of brush is required. In applications where a heavy burr is to be removed with a grinding wheel or carbide tool, a brush is often used as a secondary process for removing small burrs created by the first process.
Carbide Tools
The use of carbide deburring tools is a relatively new development. There are three advantages to using carbide tools:
  • Reduced deburring time: carbide tools can run at 40,000 RPM, vs. 15,000 to 18,000 RPM for grinding wheels.

  • Reduced setup time, because there is no need to establish an approach angle as with a grinding wheel.

  • The ability to deburr cluster gears, or gears having the root of the tooth close to the gear shaft or hub.
Deburring Machine Features
The deburring process is accomplished with floating-style deburring heads having variable RPM air motors or turbines. The floating heads have air-operated, adjustable counterweights for adjusting the pressure applied to the part being deburred.
The floating heads can use grinding wheels, brushes, or carbide tools, and change-over from one to the other can be accomplished in a matter of minutes, providing versatility for doing a number of different parts on one machine.
ADVANTAGES:
1. Quick-action clamping.
2. Precise indexing.
3. A multi-module indexer makes de-burring of the full range of spur gears possible.
4. Fast de-burring due to the sequential operation of the grinding head and indexer mechanism.
5. Low-cost automation.
6. Flexible circuit design; the machine can be converted to fully automatic mode with minimal circuit components.
7. Saves labor cost and reduces the monotony of operation.

APPLICATIONS:
1. Machine tool manufacturing industry.
2. Agriculture machinery manufacturing.
3. Molded gear industry.
4. Timer pulley manufacturing.
5. Sprocket and chain wheel manufacturing, etc.

4G Wi-Fi Revolution

Wi-Fi is an extremely powerful resource that connects people, businesses, and, increasingly, the Internet of Things. It is used in our homes, colleges, businesses, favorite cafes, buses, and many of our public spaces. However, it is also a hugely complex technology. Designing, deploying, and maintaining a successful WLAN is no easy task; the goal is to make that task easier for WLAN administrators of all skill levels through education, knowledge-sharing, and community participation.
In malls, restaurants, hotels, and other service stations, Wi-Fi seems to be active everywhere. While supplemental downlink channels are 20 MHz each, Wi-Fi channels can be 20 MHz, 40 MHz, 80 MHz, or even 160 MHz. On many occasions I have had to switch off my Wi-Fi because the speed was so poor, and go back to using 4G.
On my smartphone, most days I get 30-40 Mbps download speed, and it works perfectly for all my needs. The only reason we would need higher speeds is to tether the phone and use a laptop for work, watch videos, play games, listen to music, or download anything we want. Most of the people I work with don't require gigabit speed at the moment.
Once a user who is receiving high-speed data on their device via LTE-U/LAA creates a Wi-Fi hotspot, it may use the same 5 GHz channels that the network is using for the supplemental downlink. Users then ask why their download speed falls as soon as they switch Wi-Fi on.
The fact is that in rural areas, and even in general built-up areas, operators do not have to worry about the network being overloaded and can use their licensed spectrum; nobody is planning to deploy LTE-U/LAA in these areas. In dense and ultra-dense areas, there are many more users, many more Wi-Fi access points, ad-hoc Wi-Fi networks, and many other sources of interference.

Smart Home Technology

Smart home technology lets homeowners monitor their houses remotely, countering dangers such as a forgotten coffee maker left on or a front door left unlocked.

Smart homes are also beneficial for the elderly, providing monitoring that can help seniors to remain at home comfortably and safely, rather than moving to a nursing home or requiring 24/7 home care.

Unsurprisingly, smart homes can accommodate user preferences. For example, as soon as you arrive home, your garage door will open, the lights will go on, the fireplace will roar and your favorite tunes will start playing on your smart speakers.

 

Home automation also helps consumers improve efficiency. Instead of leaving the air conditioning on all day, a smart home system can learn your behaviors and make sure the house is cooled down by the time you arrive home from work. The same goes for appliances. And with a smart irrigation system, your lawn will only be watered when needed and with the exact amount of water necessary. With home automation, energy, water and other resources are used more efficiently, which helps save both natural resources and money for the consumer.

However, home automation systems have struggled to become mainstream, in part due to their technical nature. A drawback of smart homes is their perceived complexity; some people have difficulty with technology or will give up on it with the first annoyance. Smart home manufacturers and alliances are working on reducing complexity and improving the user experience to make it enjoyable and beneficial for users of all types and technical levels.

For home automation systems to be truly effective, devices must be inter-operable regardless of who manufactured them, using the same protocol or, at least, complementary ones. As it is such a nascent market, there is no gold standard for home automation yet. However, standard alliances are partnering with manufacturers and protocols to ensure inter-operability and a seamless user experience.

"Intelligence is the ability to adapt to change."

– Stephen Hawking

 

How smart homes work/smart home implementation

Newly built homes are often constructed with smart home infrastructure in place. Older homes, on the other hand, can be retrofitted with smart technologies. While many smart home systems still run on X10 or Insteon, Bluetooth and Wi-Fi have grown in popularity.

Zigbee and Z-Wave are two of the most common home automation communications protocols in use today. Both are mesh network technologies that use short-range, low-power radio signals to connect smart home systems. Though both target the same smart home applications, Z-Wave has a range of 30 meters to Zigbee's 10 meters, and Zigbee is often perceived as the more complex of the two. Zigbee chips are available from multiple companies, while Z-Wave chips are only available from Sigma Designs.

A smart home is not disparate smart devices and appliances, but ones that work together to create a remotely controllable network. All devices are controlled by a master home automation controller, often called a smart home hub. The smart home hub is a hardware device that acts as the central point of the smart home system and is able to sense, process data and communicate wirelessly. It combines all of the disparate apps into a single smart home app that can be controlled remotely by homeowners. Examples of smart home hubs include Amazon Echo, Google Home, Insteon Hub Pro, Samsung SmartThings and Wink Hub, among others.

Some smart home systems can be created from scratch, for example, using a Raspberry Pi or other prototyping board. Others can be purchased as a bundled smart home kit, also known as a smart home platform, that contains the pieces needed to start a home automation project.

In simple smart home scenarios, events can be timed or triggered. Timed events are based on a clock, for example, lowering the blinds at 6:00 p.m., while triggered events depend on actions in the automated system; for example, when the owner’s smartphone approaches the door, the smart lock unlocks and the smart lights go on.
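
The timed versus triggered distinction above can be sketched in a few lines; this is a hypothetical illustration, not a real smart home hub API, and the rule and action names are invented:

```java
// SmartHomeRules.java -- hypothetical sketch, not a real hub API.
import java.time.LocalTime;

public class SmartHomeRules {
    // Timed event: fires once the clock reaches the scheduled time.
    public static boolean timedRuleFires(LocalTime scheduledAt, LocalTime now) {
        return !now.isBefore(scheduledAt);
    }

    // Triggered event: fires in response to a sensor observation,
    // e.g. the owner's smartphone approaching the front door.
    public static String onOwnerPhoneNearDoor() {
        return "unlock smart lock; turn on smart lights";
    }

    public static void main(String[] args) {
        LocalTime blindsAt = LocalTime.of(18, 0); // lower the blinds at 6:00 p.m.
        if (timedRuleFires(blindsAt, LocalTime.now())) {
            System.out.println("lower the blinds");
        }
        System.out.println(onOwnerPhoneNearDoor());
    }
}
```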

It involves the control and automation of lighting, heating (such as smart thermostats), ventilation, air conditioning (HVAC), and security (such as smart locks), as well as home appliances such as washers/dryers, ovens, and refrigerators/freezers. Wi-Fi is often used for remote monitoring and control. Home devices, when remotely monitored and controlled via the Internet, are an important constituent of the Internet of Things. Modern systems generally consist of switches and sensors connected to a central hub, sometimes called a "gateway", from which the system is controlled through a user interface: a wall-mounted terminal, mobile phone software, a tablet computer, or a web interface, often but not always via Internet cloud services.

While there are many competing vendors, there are very few worldwide accepted industry standards and the smart home space is heavily fragmented. Manufacturers often prevent independent implementations by withholding documentation and by litigation.

 

EyeRing

EyeRing is a wearable interface that uses a pointing gesture or touch to access digital information about objects and the world. The idea of a micro camera worn as a ring on the index finger started as an experimental assistive technology for visually impaired persons; however, the team soon realized its potential for assistive interaction across the usability spectrum, for children and visually able adults as well. With a button on the side, which can be pushed with the thumb, the ring takes a picture or a video that is sent wirelessly to a mobile phone.

A computation element embodied as a mobile phone is in turn accompanied by the earpiece for information loopback. The finger-worn device is autonomous and wireless. A single button initiates the interaction. Information transferred to the phone is processed, and the results are transmitted to the headset for the user to hear.

Several videos about EyeRing have been made, one of which shows a visually impaired person making his way through a retail clothing environment. He touches t-shirts on a rack as he tries to find his preferred color and size and to learn the price. He uses his EyeRing finger to point to a shirt and hears that it is gray, then points to the price tag to find out how much the shirt costs.

The researchers note that a user needs to pair the finger-worn device with the mobile phone application only once. Henceforth a Bluetooth connection will be automatically established when both are running.

The Android application on the mobile phone analyzes the image using the team's computer vision engine. The type of analysis and response depends on the pre-set mode, for example, color, distance, or currency. Upon analyzing the image data, the Android application uses a Text-to-Speech module to read out the information through a headset, according to the researchers.

The MIT group behind EyeRing comprises Suranga Nanayakkara, visiting faculty in the Fluid Interfaces group at the MIT Media Lab and also a professor at the Singapore University of Technology and Design; Roy Shilkrot, a first-year doctoral student in the group; and Patricia Maes, associate professor and founder of the Media Lab's Fluid Interfaces group.

The EyeRing concept is promising, but the team expects the prototype to evolve through further iterations. Having shown it is a viable solution, they now seek to make it better; the creators say their work is still very much in progress. The current implementation uses a TTL Serial JPEG Camera, a 16 MHz AVR processor, a Bluetooth module, a 3.7 V polymer lithium-ion battery, a 3.3 V regulator, and a push-button switch. They also look forward to a device that can carry advanced capabilities such as a real-time video feed from the camera, higher computational power, and additional sensors like gyroscopes and a microphone. These capabilities are in development for the next prototype of EyeRing.

A Finger-worn Assistant

The desire to replace an impaired human visual sense or augment a healthy one had a strong influence on the design and rationale behind EyeRing. To that end, we propose a system composed of a finger-worn device with an embedded camera, a computing element embodied as a mobile phone, and an earpiece for audio feedback. The finger-worn device is autonomous and wireless, and includes a single button to initiate the interaction. Information from the device is transferred to the computation element, where it is processed, and the results are transmitted to the headset for the user to hear.

Typically, a user would single-click the pushbutton switch on the side of the ring using his thumb. At that moment, a snapshot is taken from the camera and the image is transferred via Bluetooth to the mobile phone. An Android application on the mobile phone then analyzes the image using our computer vision engine. Upon analyzing the image data, the Android application uses a Text-to-Speech module to read out the information through a hands-free headset. Users can change the preset mode by double-clicking the pushbutton and giving the system a brief verbal command such as “distance”, “color”, or “currency”.
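The interaction flow described above (single click to capture and hear a result, double click plus a spoken word to change mode) can be sketched as a small state machine. Every function and name below is a hypothetical stand-in, not the EyeRing team's actual code.

```python
# Illustrative sketch of the EyeRing interaction flow.
# analyze() and speak() stand in for the phone-side computer vision
# engine and Text-to-Speech module, which run after the ring sends an
# image over Bluetooth.

MODES = ["color", "distance", "currency"]

def analyze(image, mode):
    # Placeholder for the computer-vision engine on the phone.
    return f"{mode}: {image['label']}"

def speak(text):
    # Placeholder for Text-to-Speech reading to the headset.
    return f"(spoken) {text}"

def handle_click(clicks, state, image=None, voice_command=None):
    """Single click: capture and analyze; double click: switch mode."""
    if clicks == 2 and voice_command in MODES:
        state["mode"] = voice_command          # e.g. user says "currency"
        return f"mode set to {voice_command}"
    if clicks == 1 and image is not None:
        return speak(analyze(image, state["mode"]))
    return None

state = {"mode": "color"}
print(handle_click(2, state, voice_command="currency"))
# → mode set to currency
print(handle_click(1, state, image={"label": "20 dollar bill"}))
# → (spoken) currency: 20 dollar bill
```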

Big Data

By Author – Shubhangi Agarwal

 

Big data is a blanket term for the non-traditional strategies and technologies used to organize, process, and gather insights from large datasets. While the problem of working with data that exceeds the computing power or storage of a single computer is not new, the pervasiveness, scale, and value of this type of computing have greatly expanded in recent years.

In this article, we will talk about big data on a fundamental level and define common concepts you might come across while researching the subject. We will also take a high-level look at some of the processes and technologies currently being used in this space.

What Is Big Data?
An exact definition of “big data” is difficult to nail down because projects, vendors, practitioners, and business professionals use it quite differently. With that in mind, generally speaking, big data is:

  1. Large datasets
  2. The category of computing strategies and technologies that are used to handle large datasets

In this context, “large dataset” means a dataset too large to reasonably process or store with traditional tooling or on a single computer. This means that the common scale of big datasets is constantly shifting and may vary significantly from organization to organization.
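One concrete way to see why "too large for one computer" is relative: even data that does not fit in memory can often be handled by streaming it one record at a time. A minimal sketch (the data source here is a generator standing in for a huge file or table):

```python
# Traditional tooling often assumes the dataset fits in memory;
# streaming processing relaxes that by touching one record at a time.

def running_mean(records):
    """Compute a mean over an arbitrarily long stream in O(1) memory."""
    count, total = 0, 0.0
    for value in records:
        count += 1
        total += value
    return total / count if count else 0.0

# Works identically whether `records` is a small list or a generator
# reading billions of rows from disk.
big_stream = (x % 100 for x in range(1_000_000))  # stand-in for a huge source
print(running_mean(big_stream))
# → 49.5
```

When even a single machine's time budget is exceeded, big data systems split such a stream across many machines, which is where frameworks like Hadoop come in.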

Why Are Big Data Systems Different?
The basic requirements for working with big data are the same as the requirements for working with datasets of any size. However, the massive scale, the speed of ingesting and processing, and the characteristics of the data that must be dealt with at each stage of the process present significant new challenges when designing solutions. The goal of most big data systems is to surface insights and connections from large volumes of heterogeneous data that would not be possible using conventional methods.

 

Big Data Analytics

Big Data Analytics is one of the great new frontiers of IT. Data is exploding so fast, and the promise of deeper insights is so compelling, that IT managers are highly motivated to turn big data into an asset they can manage and exploit for their organizations. Emerging technologies such as the Hadoop framework and MapReduce offer new and exciting ways to process and transform big data – defined as complex, unstructured, or large amounts of data – into meaningful insights, but they also require IT to deploy infrastructure differently to support the distributed processing requirements and real-time demands of big data analytics. Big data is data sets so voluminous and complex that traditional data processing application software is inadequate to deal with them. Big data challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy and data sources. There are five dimensions to big data, known as Volume, Variety, Velocity and the more recently added Veracity and Value.
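The MapReduce model mentioned above reduces to two steps: a map phase that emits key/value pairs, and a reduce phase that aggregates values per key. A toy word count, the classic MapReduce example, shows the shape (in Hadoop the pairs would be shuffled across many machines between the two phases):

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce step: sum the counts for each distinct word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data", "big insights from big data"]
print(reduce_phase(map_phase(docs)))
# → {'big': 3, 'data': 2, 'insights': 1, 'from': 1}
```

The value of the model is that both phases parallelize naturally: documents can be mapped on different machines, and each key can be reduced independently.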

Lately, the term “Big Data” tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analysis methods that extract value from data, and seldom to a particular size of data set. “There is little doubt that the quantities of data now available are indeed large, but that's not the most relevant characteristic of this new data ecosystem.” Analysis of data sets can find new correlations to “spot business trends, prevent diseases, combat crime and so on.” Scientists, business executives, practitioners of medicine, advertising and governments alike regularly meet difficulties with large data sets in areas including Internet search, fintech, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics, connectomics, complex physics simulations, biology and environmental research. You can take data from any source and analyze it to find answers that enable:

  1. Cost reductions
  2. Time reductions
  3. New product development and optimized offerings
  4. Smart decision making

The importance of big data doesn't revolve around how much data you have, but what you do with it.

Jain Software also provides projects based on Big Data. You can contact Jain Software directly by calling +91-771-4700-300, or you can email us at Global@Jain.software.

5G Wireless Systems

5G technology is going to be a new mobile revolution in the technology market. Through 5G technology you will be able to use your cellular phone worldwide. With the arrival of PDA-like cell phones, your whole office is now at your fingertips, in your phone. 5G technology has extraordinary data capabilities and the ability to tie together unrestricted call volumes and near-infinite data broadcast within the latest mobile operating systems. 5G technology has a bright future because it can handle the best technologies and offer capable handsets to customers. In the coming days, 5G technology may well take over the world market.

5G technologies have an extraordinary capability to support software and services. The router and switch technology used in a 5G network provides high connectivity. 5G distributes internet access to nodes within a building and can be deployed over a union of wired and wireless network connections. The current trend suggests 5G technology has a glowing future.

The 5G terminals will have software-defined radios and modulation schemes, as well as new error-control schemes, that can be downloaded from the Internet. Development is moving toward user terminals as a focus of 5G mobile networks. The terminals will have access to different wireless technologies at the same time, and the terminal should be able to combine different flows from different technologies. Vertical handovers should be avoided, because they are not feasible when there are many technologies and many operators and service providers. In 5G, each network will be responsible for handling user mobility, while the terminal will make the final choice among different wireless/mobile access network providers for a given service. Such a choice will be based on open intelligent middleware in the mobile phone.
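The "terminal makes the final choice" idea can be sketched as a simple selection policy: the middleware filters the access networks it can see by a service's requirements and picks the best fit. The networks, numbers, and selection criterion below are illustrative assumptions, not taken from any standard.

```python
# Sketch of terminal-side network selection ("open intelligent
# middleware"): pick an access network per service, based on the
# characteristics each network advertises. All values are invented.

networks = [
    {"name": "operator-A LTE", "bandwidth_mbps": 100,  "latency_ms": 40},
    {"name": "operator-B 5G",  "bandwidth_mbps": 2000, "latency_ms": 5},
    {"name": "home Wi-Fi",     "bandwidth_mbps": 300,  "latency_ms": 15},
]

def choose_network(min_bandwidth_mbps, max_latency_ms):
    """Return the lowest-latency network meeting a service's needs."""
    candidates = [n for n in networks
                  if n["bandwidth_mbps"] >= min_bandwidth_mbps
                  and n["latency_ms"] <= max_latency_ms]
    return min(candidates, key=lambda n: n["latency_ms"]) if candidates else None

# A latency-sensitive service (e.g. a video call) lands on the 5G link:
print(choose_network(50, 20)["name"])
# → operator-B 5G
```

A real terminal would weigh more factors (cost, battery, signal strength, operator policy), but the structure is the same: per-service requirements matched against per-network characteristics.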

 

While 5G isn’t expected until 2020, an increasing number of companies are investing now to prepare for the new mobile wireless standard. We explore 5G, how it works and its impact on future wireless systems.

 

According to the Next Generation Mobile Network’s 5G white paper, 5G connections must be based on ‘user experience, system performance, enhanced services, business models and management & operations’.

 

And according to the Group Special Mobile Association (GSMA), to qualify as 5G, a connection should meet most of these eight criteria:

  1. One to 10Gbps connections to end points in the field
  2. One millisecond end-to-end round trip delay
  3. 1000x bandwidth per unit area
  4. 10 to 100x number of connected devices
  5. (Perception of) 99.999 percent availability
  6. (Perception of) 100 percent coverage
  7. 90 percent reduction in network energy usage
  8. Up to ten-year battery life for low power, machine-type devices

Previous generations like 3G were a breakthrough in communications. 3G receives a signal from the nearest phone tower and is used for phone calls, messaging and data.

4G works the same as 3G but with a faster internet connection and a lower latency (the time between cause and effect).

 

Like each previous generation, 5G will be significantly faster than its predecessor, 4G.

This should allow for higher productivity across all capable devices with a theoretical download speed of 10,000 Mbps.

“Current 4G mobile standards have the potential to provide 100s of Mbps. 5G offers to take that into multi-gigabits per second, giving rise to the Gigabit Smartphone and hopefully a slew of innovative services and applications that truly need the type of connectivity that only 5G can offer,” says Paul Gainham, senior director, SP Marketing EMEA at Juniper Networks.

Plus, with greater bandwidth comes faster download speeds and the ability to run more complex mobile internet apps.
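The arithmetic behind those claims is simple enough to check: at 4G's ~100 Mbps versus 5G's theoretical 10,000 Mbps, the time to download a 5 GB file (a rough figure for an HD movie, used here purely as an illustration) drops by two orders of magnitude.

```python
# Rough download-time comparison using the speeds quoted above.
# Uses decimal units: 1 GB = 8,000 megabits.

def download_seconds(size_gb, speed_mbps):
    """Seconds to transfer size_gb gigabytes at speed_mbps megabits/s."""
    megabits = size_gb * 8 * 1000
    return megabits / speed_mbps

print(download_seconds(5, 100))     # 4G at 100 Mbps → 400.0 seconds
print(download_seconds(5, 10_000))  # 5G at 10 Gbps  → 4.0 seconds
```

Real-world throughput is well below theoretical peak for both generations, so the ratio matters more than the absolute numbers.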

 

The future of 5G

As 5G is still in development, it is not yet open for use by anyone. However, lots of companies have started creating 5G products and field testing them.

Notable advancements in 5G technologies have come from Nokia, Qualcomm, Samsung, Ericsson and BT, with growing numbers of companies forming 5G partnerships and pledging money to continue to research into 5G and its application.

Qualcomm and Samsung have focused their 5G efforts on hardware, with Qualcomm creating a 5G modem and Samsung producing a 5G enabled home router.

Both Nokia and Ericsson have created 5G platforms aimed at mobile carriers rather than consumers. Ericsson created the first 5G platform earlier this year, which it claims provides the first 5G radio system; Ericsson began 5G testing in 2015.

Who is investing in 5G?

 


Similarly, in early 2017, Nokia launched “5G First”, a platform aiming to provide end-to-end 5G support for mobile carriers.

Looking closer to home, the City of London turned on its district-wide public Wi-Fi network in October 2017, consisting of 400 small cell transmitters. The City plans to run 5G trials on it.

Chancellor Philip Hammond revealed in the Budget 2017 that the government will pledge £16 million to create a 5G hub. However, given the slow rollout of 4G, it is unknown at what rate 5G will advance.
