Business

Enterprise Resource Planning

By Prankul Sinha

 

Introduction:

  • ERP usually refers to a category of business-management software that an organization can use to collect, store, manage, and interpret data from its many business activities.
  • ERP provides a continuously updated view of core business processes using common databases maintained by a database management system. ERP systems track business resources (cash, raw materials, production capacity) and the status of business commitments: orders, purchase orders, and payroll. The applications that make up the system share data across the various departments (manufacturing, purchasing, sales, accounting, etc.) that provide the data.
  • The system integrates with the organization's other systems, supporting error-free transactions and production. It runs on a variety of computer hardware and network configurations, typically using a database as an information source.

 

Implementation:
Generally, three types of services are available to help implement such changes: consulting, customization, and support. Implementation time depends on business size, the extent of customization, and the scope of process changes. Modular ERP systems can be implemented in stages. A typical project for a large enterprise takes about 14 months and requires around 150 consultants. Small projects can require months; multinational and other large implementations can take years. Customization can substantially increase implementation times.

Besides that, information processing influences various business functions. For example, some large corporations such as Walmart use a just-in-time inventory system, which reduces inventory storage, increases delivery efficiency, and requires up-to-date data. Before 2014, Walmart used a system called Inforem, developed by IBM, to manage replenishment.

 

Process preparation:

Implementing ERP typically requires changes in existing business processes. Poor understanding of needed process changes prior to starting implementation is the main reason for project failure. The difficulties could be related to the system, business process, infrastructure, training, or lack of motivation.

It is therefore crucial that organizations thoroughly analyze business processes before they implement ERP software. Analysis can identify opportunities for process modernization. It also enables an assessment of the alignment of current processes with those provided by the ERP system. Research indicates that risk of business process mismatch is decreased by:

  • Linking current processes to the organization's strategy
  • Analyzing the effectiveness of each process
  • Understanding existing automated solutions

 

Customization:
ERP systems are theoretically based on industry best practices, and their makers intend that organizations deploy them as is. ERP vendors do offer customers configuration options that let organizations incorporate their own business rules, but gaps in features often remain even after configuration is complete. ERP customers have several options to reconcile feature gaps, each with its own pros and cons. Technical solutions include rewriting part of the delivered software, writing a homegrown module to work within the ERP system, or interfacing to an external system.

 

Advantages:
The most fundamental advantage of ERP is that the integration of myriad business processes saves time and expense. Management can make decisions faster and with fewer errors. Data becomes visible across the organization. Tasks that benefit from this integration include:

  • Sales forecasting, which allows inventory optimization
  • Chronological history of every transaction through relevant data compilation in every area of operation
  • Order tracking, from acceptance through fulfillment
  • Revenue tracking, from invoice through cash receipt
  • Matching purchase orders (what was ordered), inventory receipts (what arrived), and costing

 

Disadvantages:

Customization can be problematic. Compared to the best-of-breed approach, ERP can be seen as meeting an organization's lowest-common-denominator needs, forcing the organization to find workarounds to meet unique demands.

  • Re-engineering business processes to fit the ERP system may damage competitiveness or divert focus from other critical activities.
  • ERP can cost more than less integrated or less comprehensive solutions.
  • High ERP switching costs can increase the ERP vendor’s negotiating power, which can increase support, maintenance, and upgrade expenses.
  • Overcoming resistance to sharing sensitive information between departments can divert management attention.
  • Integration of truly independent businesses can create unnecessary dependencies.
  • Extensive training requirements take resources from daily operations.
  • Harmonization of ERP systems can be a mammoth task and requires a lot of time, planning, and money.

Attacks on Smart Cards

By Samata Shelare

 

When hit by an APT attack, many companies implement smart cards and/or other two-factor authentication mechanisms as a reactive measure. But thinking that these solutions will prevent credential theft is a big mistake. Attackers can bypass these protection mechanisms with clever techniques.

Nowadays, adversaries in the form of self-spreading malware or APT campaigns utilize Pass-the-Hash, a technique that allows them to escalate privileges in the domain. When Pass-the-Hash is not handy, they will use other techniques such as Pass-the-Ticket or Kerberoasting.

What makes smart cards so special?

A smart card is a piece of specialized cryptographic hardware that contains its own CPU, memory, and operating system. Smart cards are especially good at protecting cryptographic secrets, like private keys and digital certificates.

Smart cards may look like credit cards without the stripe, but they're far more secure. They store their secrets until the right interfacing software accesses them in a predetermined manner and the correct second-factor PIN is provided. Smart cards often hold users' personal digital certificates, which prove a user's identity to an authentication requestor. Even better, smart cards rarely hand over the user's private key. Instead, they provide the requesting authenticator "proof" that they have the correct private key.
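
To make that "proof" idea concrete, here is a minimal Java sketch of certificate-style proof of possession. It is illustrative only: the freshly generated key pair stands in for the card, whose private key in reality never leaves the secure chip.

```java
import java.security.*;

// Minimal sketch of "proof of possession" as used in smart card
// authentication. The KeyPair stands in for the card: in reality the
// private key never leaves the card's secure hardware.
public class ProofOfPossession {
    public static void main(String[] args) throws Exception {
        KeyPair card = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // Authenticator sends a random challenge (a nonce).
        byte[] challenge = new byte[32];
        SecureRandom.getInstanceStrong().nextBytes(challenge);

        // Card signs the challenge internally with its private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(card.getPrivate());
        signer.update(challenge);
        byte[] proof = signer.sign();

        // Authenticator verifies with the public key from the card's certificate.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(card.getPublic());
        verifier.update(challenge);
        System.out.println("Proof valid: " + verifier.verify(proof));
    }
}
```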

After a company is subjected to a pass-the-hash attack, it often responds by jettisoning weak or easy password hashes. On many occasions, smart cards are the recommended solution, and everyone jumps on board. Because digital certificates aren’t hashes, most people think they’ve found the answer.

In this experiment, we will perform the four most common credential theft attacks on a domain-connected PC with both smart card and 2FA enabled.

  1. Clear text password theft
  2. Pass the hash attack
  3. Pass the ticket attack
  4. Process token manipulation attack

Pass the Smart Card Hash

When authenticating a user with a smart card and PIN (Personal Identification Number) code in an Active Directory network (which is 90% of all networks), the Domain Controller returns an NTLM hash. The hash is calculated based on a randomly selected string. Presenting this hash to the DC identifies you as that user.

This hash can be reused and replayed without the need for the smart card. It is stored in the LSASS process inside the endpoint memory, and it's easily readable by an adversary who has managed to compromise the endpoint, using tools like Mimikatz, WCE, or even just dumping the memory of the LSASS process with the Task Manager. This hash exists in memory because it's crucial for single sign-on (SSO) support.

This is how smart card login works:

  • The user inserts his smart card and enters his own PIN in a login window.
  • The smart card subsystem authenticates the user as the owner of the smart card and retrieves the certificate from the card.
  • The smart card client sends the certificate to the KDC (Kerberos Key Distribution Center) on the DC.
  • The KDC verifies the Smart Card Login Certificate, retrieves the associated user of this certificate, and builds a Kerberos TGT for that user.
  • The KDC returns encrypted TGT back to the client.
  • The smart card client decrypts the TGT and retrieves the NTLM hash from the negotiation.
  • Presenting only the TGT or the NTLM hash from now on will get you authenticated.

During standard login, the NTLM hash is calculated from the user's password. Because the smart card doesn't involve a password, the hash is only calculated once you set the "smart card required for interactive logon" attribute on the account. GPO can force users to change their passwords periodically, but this smart-card hash is exempt, which exposes a huge persistence security risk. Once the smart card user's computer is compromised, an attacker can grab the hash generated from the smart card authentication. Now he has a hash with unlimited lifetime, and worse, lifetime persistence on your domain, because the hash will never change as long as smart card logon is forced for that user.
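
As a hedged illustration of how such a hash is derived, the sketch below computes an NT hash (MD4 over the UTF-16LE encoding of the password) in Java. It assumes the third-party Bouncy Castle library, since the JDK ships no MD4 implementation; for a smart-card-only account the "password" is a random value set by the DC, which is why the resulting hash never rotates on its own.

```java
import org.bouncycastle.crypto.digests.MD4Digest;
import java.nio.charset.StandardCharsets;

// Sketch of NT hash derivation: MD4 over the UTF-16LE password bytes.
// Requires Bouncy Castle (the JDK does not ship MD4). For smart-card-only
// accounts the "password" is a random value set by the DC, so this hash
// never rotates unless the domain rotates it.
public class NtHash {
    static byte[] ntHash(String password) {
        byte[] input = password.getBytes(StandardCharsets.UTF_16LE);
        MD4Digest md4 = new MD4Digest();
        md4.update(input, 0, input.length);
        byte[] out = new byte[md4.getDigestSize()];
        md4.doFinal(out, 0);
        return out;
    }

    public static void main(String[] args) {
        StringBuilder hex = new StringBuilder();
        for (byte b : ntHash("Password1")) hex.append(String.format("%02x", b));
        System.out.println(hex); // the NT hash, as an attacker would read it from LSASS
    }
}
```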

However, Microsoft offers a solution to the smart card persistence problem: it will rotate the hashes of your smart card accounts every 60 days. But this is only applicable if your domain functional level is Windows Server 2016.

Smart cards can't protect against Pass-the-Hash, and their hash hardly ever changes.

Pass-The-2FA Hash

During authentication with some third-party 2FA solutions, the hash is calculated from the user's managed password. And because the password is managed, it is changed frequently, sometimes even immediately.

In some cases, 2FA managed to mitigate Pass-the-Hash attempts because the hash was calculated using the OTP (one-time password). Therefore, the hash won't be valid anymore, and the adversary who stole it won't be able to authenticate with it.

Other vendors like AuthLite mitigate Pass-the-Hash attempts because the cached hash of 2FA sessions is manipulated by AuthLite, so stealing the hash from memory is useless. There's still additional verification in the DC, and the OTP must be forwarded to AuthLite before authenticating as a 2FA token.

Depending on the 2FA solution you have, you probably won't be able to Pass-the-Hash.

With their embedded microchip technology and the secure authentication they can provide, smart cards or hardware tokens have been relied upon to give physical access and the go-ahead for data transfer in a multitude of applications and transactions, in the public, corporate, and government sectors.

But, robust as they are, smart cards do have weaknesses, and intelligent hackers have developed a variety of techniques for observing and blocking their operations so as to gain access to credentials, information, and funds. In this article, we will take a closer look at the technology and how it is being used in smart card attacks.

Smart Communications

Small information packets called Application Protocol Data Units (APDUs) are the basis of communication between a Card Accepting Device (CAD) and a smart card, which may take the form of a standard credit-card-sized unit, the SIM card for a smartphone, or a USB dongle.

Data travels between the smart card and CAD in one direction at a time, and both objects use an authentication protocol to identify each other. A random number generated by the card is sent to the CAD, which uses a shared encryption key to scramble the digits before sending the number back. The card compares this returned figure with its own encryption, and a reverse process occurs as the communication exchange continues.

Each message between the two is authenticated by a special code: a figure based on the message content, a random number, and an encryption key. Symmetric DES (Data Encryption Standard), 3DES (triple DES), and public-key RSA (the Rivest-Shamir-Adleman algorithm) are the encryption methods most commonly used.
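
A minimal Java sketch of the challenge-response core described above, using 3DES from the JDK. Real cards exchange APDUs and add per-message authentication codes; this shows only the cryptographic idea, with an invented shared key.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.SecureRandom;
import java.util.Arrays;

// Sketch of the card/CAD mutual-authentication idea: the card issues a
// random challenge, the CAD encrypts it with the shared key, and the card
// checks the result against its own encryption of the same challenge.
public class ChallengeResponse {
    public static void main(String[] args) throws Exception {
        SecretKey shared = KeyGenerator.getInstance("DESede").generateKey();

        // Card: generate an 8-byte random challenge (one 3DES block).
        byte[] challenge = new byte[8];
        new SecureRandom().nextBytes(challenge);

        // CAD: encrypt the challenge with the shared key and send it back.
        Cipher cad = Cipher.getInstance("DESede/ECB/NoPadding");
        cad.init(Cipher.ENCRYPT_MODE, shared);
        byte[] response = cad.doFinal(challenge);

        // Card: encrypt its own copy and compare with the CAD's response.
        Cipher card = Cipher.getInstance("DESede/ECB/NoPadding");
        card.init(Cipher.ENCRYPT_MODE, shared);
        boolean cadAuthentic = Arrays.equals(card.doFinal(challenge), response);
        System.out.println("CAD authenticated: " + cadAuthentic);
    }
}
```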

Generally secure, then, but hackers using brute-force methods are capable of breaking each of these encryptions, given enough time and sufficiently powerful hardware.

OS-Level Protection

Smart card operating systems organize their data into a three-level hierarchy. At the top, the root or Master File (MF) may hold several Dedicated Files (DFs: analogous to directories or folders) and Elementary Files (EFs: like regular files on a computer). But DFs can also hold files, and all three levels use headers which spell out their security attributes and user privileges. Applications may only move to a position on the OS if they have the relevant access rights.
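
The hierarchy can be pictured as a small tree with an access condition in each header. The following toy Java model is a sketch under stated assumptions: "3F00" is the conventional MF identifier, but the access levels and the other file IDs are illustrative, not taken from any standard.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the three-level smart card file system: a Master File (MF)
// at the root, Dedicated Files (DFs) as directories, Elementary Files (EFs)
// as leaves, each header carrying a (simplified) read-access condition.
public class CardFileSystem {
    enum Access { ALWAYS, CHV1, CHV2, NEVER }  // simplified security attributes

    static class File {
        final String id;          // e.g. "3F00" is the conventional MF identifier
        final Access readAccess;  // level required to read this file
        final List<File> children = new ArrayList<>();
        File(String id, Access readAccess) { this.id = id; this.readAccess = readAccess; }
    }

    // An application may only read a file if it has satisfied the required level.
    static boolean canRead(File f, Access granted) {
        return f.readAccess != Access.NEVER && granted.ordinal() >= f.readAccess.ordinal();
    }

    public static void main(String[] args) {
        File mf = new File("3F00", Access.ALWAYS);  // Master File (root)
        File df = new File("7F10", Access.ALWAYS);  // Dedicated File (directory)
        File ef = new File("6F3A", Access.CHV1);    // Elementary File (data)
        mf.children.add(df);
        df.children.add(ef);
        System.out.println("Read EF without PIN: " + canRead(ef, Access.ALWAYS));
        System.out.println("Read EF after CHV1 PIN: " + canRead(ef, Access.CHV1));
    }
}
```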

Personal Identification Numbers (PINs) are associated with the Cardholder verification 1 (CHV1) and Cardholder verification 2 (CHV2) levels of access, which correspond to the user PIN allocated to a cardholder and the unblocking code needed to re-enable a compromised card.

The operating system blocks a card after an incorrect PIN is entered a certain number of times. This applies to both the user PIN and the unblocking code, and while it provides a measure of security against fraud for the cardholder, it also provides malicious intruders with an opportunity for sabotage: locking a user out of their own accounts.
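
A minimal sketch of that retry-counter behavior, assuming a limit of three attempts (a typical but not universal value):

```java
// Sketch of the PIN retry counter: a wrong PIN decrements the counter,
// and at zero the card blocks until the unblocking code (CHV2) resets it.
public class PinLock {
    private final String pin, unblockCode;
    private int triesLeft = 3;
    private boolean blocked = false;

    PinLock(String pin, String unblockCode) { this.pin = pin; this.unblockCode = unblockCode; }

    boolean verifyPin(String attempt) {
        if (blocked) return false;                 // card refuses all PINs once blocked
        if (pin.equals(attempt)) { triesLeft = 3; return true; }
        if (--triesLeft == 0) blocked = true;      // sabotage vector: 3 bad PINs lock the user out
        return false;
    }

    boolean unblock(String attempt) {
        if (unblockCode.equals(attempt)) { blocked = false; triesLeft = 3; return true; }
        return false;                              // real cards also limit unblock attempts
    }

    public static void main(String[] args) {
        PinLock card = new PinLock("1234", "87654321");
        for (int i = 0; i < 3; i++) card.verifyPin("0000");               // three wrong PINs
        System.out.println("Right PIN while blocked: " + card.verifyPin("1234")); // false
        card.unblock("87654321");
        System.out.println("Right PIN after unblock: " + card.verifyPin("1234")); // true
    }
}
```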

Host-Based Security

Systems and networks using host-based security deploy smart cards as simple carriers of information. Data on the cards may be encrypted, but protecting it is the responsibility of the host system, and information may be vulnerable as it's being transmitted between card and computer.

Employing smart memory cards with a password mechanism that prevents unauthorized reading offers some additional protection, but passwords can still become accessible to hackers as unencrypted transmissions move between the card and the host.

Card-Based Security

Systems with card- or token-based security treat smart cards with microprocessors as independent computing devices. During interactions between cards and the host system, user identities can be authenticated and a staged protocol put in place to ensure that each card has clearance to access the system.

Access to data on the card is controlled by its own operating system, and any pre-configured permissions set by the organization issuing the card. So for hackers, the target for infiltration or sabotage becomes the smart card itself, or some breach of the issuing body which may affect the condition of cards issued in the future.

Physical Vulnerabilities

For hackers, gaining physical access to the embedded microchip on a smart card is a comparatively straightforward process.

Physical tampering is an invasive technique that begins with removing the chip from the surface of the plastic card. It's a simple enough matter of cutting away the plastic behind the chip module with a sharp knife until the epoxy resin binding it to the card becomes visible. This resin can then be dissolved with a few drops of fuming nitric acid, shaking the card in acetone until the resin and acid are washed away.

The attacker may then use an optical microscope with a camera attachment to take a series of high-resolution shots of the microchip surface. Analysis of these photos can reveal the patterns of metal lines tracing the card's data and memory bus pathways. The goal is to identify the lines that must be reproduced in order to gain access to the memory values controlling the specific data sets the attacker is looking for.

 

Digital India: Planning for the Future or Enforcement of Technology?

A lot has been done to date by the Government of India under the Digital India plan. The idea behind the plan was certainly to improve ourselves in the digital world and to adopt change as the world changes, so as not to lag behind. The Digital India website states that the vision is to transform India into a digitally empowered society and knowledge economy. But my main concern is that, two years after the program was launched, I do not see that people, other than the young generation, are at all ready to run parallel with the rest of India.

To them, Digital India is like thrusting a piece of cake down their throat: it is good, it is healthy, and even delicious, but still unwelcome to those who do not want to eat. This is how most of the people in this country feel.

The main problem is that nobody knows what to do or how to do it, and after some struggle it feels too uncomfortable to continue. The initiatives on infrastructure, services, and empowerment are truly appreciable, yet they do not reach most of the audience. What is needed is consultation; it is provided, but not in a well-guided manner, which eventually bears no fruit.

The plans under Digital India, Startup India, and Skill India are also making a great impact, but the thrust of the hammer is still not enough to bend the metal; a lot of promotion and consultation is needed to reach out to people. People have endless ideas for rural development, strong infrastructure, and economic growth, but proper monitoring is needed, just as a tree needs the most care when it is a sapling.

The Digital India plan is definitely a boon for all individuals if they can utilize the opportunity. A complete description of the approach and methodology for the Digital India program is on the website, and the words look very active, but as far as I know very little has actually been acted upon.

 

Well, I am not here just to talk about everything dark going on in this world; many things have gone positively and have actually changed after the Digital India plan. The plans that have actually made an impact on governance under Digital India can be mentioned as follows:

  • High-speed connectivity: High-speed internet in the most remote and inaccessible areas, to grow communication and connect India to the world and to newer ideas. This is a National Rural Internet Mission.
  • E-Governance: Improving governance using technology, so as to improve the government-to-citizen interface for various service deliveries.
  • E-Kranti: Delivering services electronically, and thus in a faster and time-bound manner. This is helpful in education, healthcare, planning, security, financial inclusion, justice, farming, etc.
  • Information for all: This will bring in transparency and accountability through easy and open access to documents and information for citizens.
  • Electronics manufacturing: This will encourage the manufacturing of electronics in India, reduce electronics imports, and help in job creation. It will also help in achieving the goals of the Make in India initiative.
  • Cyber security: The government is now focusing on the security of data that usually leaks and can end up in the wrong hands.
  • IT for jobs: The Skill India mission under Digital India is helping students gain practical, industrial-level experience to enhance their performance.

After seeing all these points, I think I have deepened your dilemma: is the Digital India plan really doing any good, or is it doing great? Well, my point is that we are doing well, but change happens when you do great work at a great pace, and I am much concerned about our very slow speed of growth and learning. We need to implement everything fast, but at the same time make it convenient for people to use; otherwise it won't help anyone until we act on it strongly and boldly.

Y. C. James Yen once said beautifully: “The technical know-how of the experts must be transformed into the practical do-how of the people.”

Artificial Eye

By Rishabh Sontakke

 

An artificial eye is a replacement for a natural eye lost because of injury or disease. Although the replacement cannot provide sight, it fills the cavity of the eye socket and serves as a cosmetic enhancement. Before the availability of artificial eyes, a person who lost an eye usually wore a patch. An artificial eye can be attached to muscles in the socket to provide eye movement.

Today, most artificial eyes are made of plastic, with an average life of about 10 years. Children require more frequent replacement of the artificial eye due to rapid growth changes; as many as four or five artificial eyes may be required from babyhood to adulthood.

According to the Society for the Prevention of Blindness, between 10,000 and 12,000 people per year lose an eye. Though 50% or more of these eye losses are caused by accidents (in one survey, more males lost their eyes to accidents than females), there are a number of genetic conditions that can cause eye loss or require an artificial eye. Microphthalmia is a birth defect in which, for some unknown reason, the eye does not develop to its normal size. Such eyes are totally blind, or at best might have some light sensitivity.

 

Society is an artificial construction, a defense against nature's power

Some people are also born without one or both eyeballs, a condition called anophthalmia.

Retinoblastoma is a congenital (existing at birth) cancer or tumor, which is usually inherited. If a person has this condition in just one eye, the chances of passing it on are one in four or 25%.

There are two key steps in replacing a damaged or diseased eye.

–First, an ophthalmologist or eye surgeon must remove the natural eye. There are two types of operations.

  • Enucleation removes the eyeball by severing the muscles that are connected to the sclera (the white of the eyeball).
  • The surgeon then cuts the optic nerve and removes the eye from the socket.

–Second, an implant is placed into the socket to restore lost volume and to give the artificial eye some movement, and the wound is then closed.

Evisceration – In this operation, the surgeon makes an incision around the iris and then removes the contents of the eyeball. A ball made of some inert material such as plastic, glass, or silicone is then placed inside the eyeball, and the wound is closed.

Conformer – Here the surgeon places a conformer (a plastic disc) into the socket. The conformer prevents shrinking of the socket and retains adequate pockets for the artificial eye. Conformers are made of silicone or hard plastic. After the surgery, it takes the patient four to six weeks to heal. The artificial eye is then made and fitted by a professional.

Raw Materials

Plastic is the main material that makes up the artificial eye. Wax and plaster of Paris are used to make the molds. A white powder called alginate is used in the molding process. Paints and other decorating materials are used to add life-like features to the prosthesis.

 

The eyes are the mirror of the soul

The Manufacturing Process

The time to make an optic Artificial Eye from start to finish varies with each ocularist and the individual patient. A typical time is about 3.5 hours. Ocularists continue to look at ways to reduce this time.

There are two types of artificial eye.

–The very thin shell type is fitted over a blind, disfigured eye or over an eye which has been only partially removed.

–The full modified impression type is made for those who have had eyeballs completely removed. The process described here is for the latter type.

  1. The ocularist inspects the condition of the socket.

  2. The ocularist paints the iris. An iris button (made from a plastic rod using a lathe) is selected to match the patient's own iris diameter.

  3. Next, the ocularist hand-carves a wax molding shell. This shell has an aluminum iris button embedded in it that duplicates the painted iris button. The wax shell is fitted into the patient's socket so that it matches the irregular periphery of the socket.

  4. The impression is made using alginate, a white powder made from seaweed that is mixed with water to form a cream. After mixing, the cream is placed on the back side of the molding shell and the shell is inserted into the socket.

  5. The iris color is then rechecked and any necessary changes are made.

  6. A plaster-of-Paris cast is made of the mold of the patient's eye socket. After the plaster has hardened (about seven minutes), the wax and alginate mold are removed and discarded.

  7. The plastic hardens in the shape of the mold with the painted iris button embedded in the proper place.

  8. The prosthesis is then returned to the cast. Clear plastic is placed in the anterior half of the cast, and the two halves are again joined, placed under pressure, and returned to the hot water. The artificial eye is finally ready for fitting.

 

The eyes tell more than words could ever say

 

The Future

Improvements will continue in the optic artificial eye, to the benefit of both patient and ocularist. Several developments have already occurred in recent years. An artificial eye with two different-sized pupils, which can be changed back and forth by the wearer, was invented in the early 1980s. In the same period, a soft contact lens with a large black pupil was developed that simply lies over the cornea of the artificial eye.

In 1989, a patented implant called the Bio-eye was approved by the United States Food and Drug Administration. Today, over 25,000 people worldwide have benefited from this development, which is made from hydroxyapatite, a material converted from ocean coral that has both the porous structure and the chemical composition of bone. In addition to natural eye movement, this type of implant has reduced migration and extrusion, and it prevents drooping of the lower lid by lending support to the artificial eye via a peg connection.

With advancements in computer, electronics, and biomedical engineering technology, it may someday be possible to have an artificial eye that can provide sight as well. Work is already in progress to achieve this goal, based on advanced microelectronics and sophisticated image recognition techniques.

Researchers at MIT and Harvard University are also developing what will be the first artificial retina. This is based on a biochip that is glued to the ganglion cells, which act as the eye's data concentrators. The chip is composed of a tiny array of etched-metal electrodes on the retina side and a single sensor with integrated logic on the pupil side. The sensor responds to a small infrared laser that shines onto it from a pair of glasses worn by the artificial-retina recipient.

Introduction to Java

By Rashmita Soge

 

Java is a programming language created by James Gosling at Sun Microsystems (Sun) in 1991. The goal of Java is to write a program once and then run it on multiple operating systems. The first publicly available version of Java (Java 1.0) was released in 1995. Sun Microsystems was acquired by the Oracle Corporation in 2010, and Oracle now has stewardship of Java. In 2006, Sun started to make Java available under the GNU General Public License (GPL); Oracle continues this project, called OpenJDK. Over time, new enhanced versions of Java have been released. The current version of Java is Java 1.8, also known as Java 8.

Java is defined by a specification and consists of a programming language, a compiler, core libraries, and a runtime (the Java virtual machine). The Java runtime allows software developers to write program code in languages other than the Java programming language that still runs on the Java virtual machine. The Java platform is usually associated with the Java virtual machine and the Java core libraries.

What is Java?

Java is a general-purpose, class-based, object-oriented, platform-independent, architecture-neutral, portable, multithreaded, dynamic, distributed, robust, interpreted programming language.

It is intended to let application developers “write once, run anywhere”, meaning that compiled Java code can run on all platforms that support Java without the need for recompilation.

History of Java

Java is the brainchild of Java pioneer James Gosling, who traces Java's core idea of “Write Once, Run Anywhere” back to work he did in graduate school.

After spending time at IBM, Gosling joined Sun Microsystems in 1984. In 1991, Gosling partnered with Sun colleagues Michael Sheridan and Patrick Naughton on Project Green, to develop new technology for programming next-generation smart appliances. Gosling, Naughton, and Sheridan set out to develop the project based on certain rules, specifically tied to performance, security, and functionality. Those rules were that Java must be:

  1. Secure and robust
  2. High performance
  3. Portable and architecture-neutral, which means it can run on any combination of software and hardware
  4. Threaded, interpreted, and dynamic
  5. Object-oriented

Over time, the team added features and refinements that extended the heritage of C++ and C, resulting in a new language called Oak, named after a tree outside Gosling's office.

After efforts to use Oak for interactive television failed to materialize, the technology was re-targeted for the World Wide Web. The team also began working on a web browser as a demonstration platform.

Because of a trademark conflict, Oak was renamed Java, and in 1995 Java 1.0a2, along with the browser, named HotJava, was released. The Java language was designed with the following properties:

  • Platform independent: Java programs use the Java virtual machine as an abstraction and do not access the operating system directly. This makes Java programs highly portable. A Java program (which is standard-compliant and follows certain rules) can run unmodified on all supported platforms, e.g., Windows or Linux.
  • Object-oriented programming language: Except for the primitive data types, all elements in Java are objects.
  • Strongly-typed programming language: Java is strongly typed, e.g., the types of the variables used must be pre-defined, and conversion to other objects is relatively strict, e.g., it must in most cases be done by the programmer.
  • Interpreted and compiled language: Java source code is compiled into the bytecode format, which does not depend on the target platform. These bytecode instructions are interpreted by the Java virtual machine (JVM). The JVM contains a so-called Hotspot compiler, which translates performance-critical bytecode instructions into native code instructions.
  • Automatic memory management: Java manages the memory allocation and de-allocation for creating new objects. The program does not have direct access to the memory. The so-called garbage collector automatically deletes objects to which no active pointer exists.

How Java Works

To understand the primary advantage of Java, you’ll have to learn about platforms. In most programming languages, a compiler generates code that can execute on a specific target machine. For example, if you compile a C++ program on a Windows machine, the executable file can be copied to any other machine, but it will only run on other Windows machines, never on a different platform. A platform is determined by the target machine along with its operating system. For earlier languages, language designers needed to create a specialized version of the compiler for every platform. If you wrote a program that you wanted to make available on multiple platforms, you, as the programmer, would have to do quite a bit of additional work: you would have to create multiple versions of your source code, one for each platform.

Java succeeded in eliminating the platform issue for high-level programmers because it has reorganized the compile-link-execute sequence at an underlying level of the compiler. The details are complicated but, essentially, the designers of the Java language isolated those programming issues which are dependent on the platform and developed low-level means to refer to these issues abstractly. Consequently, the Java compiler doesn’t create an object file; instead it creates a bytecode file which is, essentially, an object file for a virtual machine. In fact, the Java compiler is often called the JVM compiler. To summarize how Java works, think about the compile-link-execute cycle. In earlier programming languages, the cycle is more closely defined as “compile-link then execute”. In Java, the cycle is closer to “compile then link-execute”.
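
To make the cycle concrete, here is the canonical minimal example. Compiling it with javac produces a .class bytecode file with no target CPU; any platform's JVM then link-executes that same file.

```java
// HelloPlatform.java - the classic demonstration of "compile then link-execute":
//
//   javac HelloPlatform.java   -> produces HelloPlatform.class (bytecode, no target CPU)
//   java HelloPlatform         -> any platform's JVM interprets/JIT-compiles that bytecode
//
// The same .class file runs unmodified on Windows, Linux, or macOS JVMs.
public class HelloPlatform {
    public static void main(String[] args) {
        // The JVM, not the OS, resolves this call at run time.
        System.out.println("Hello from " + System.getProperty("os.name"));
    }
}
```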

Future of Java

Java is not a legacy programming language, despite its long history. The robust use of Maven, the build tool for Java-based projects, debunks the theory that Java is outdated. Although there are a variety of deployment tools on the market, Apache Maven has by far been one of the largest automation tools developers use to deploy software applications.

With Oracle's commitment to Java for the long haul, it's not hard to see why Java will remain among the chosen programming languages for years to come. 2017 will see the release of the eighth version of Java EE, Java EE 8.

Despite its areas for improvement and threats from rival programming languages like .NET, Java is here to stay. Oracle has plans for a new version release in the early part of 2017, with new supportive features that will strongly appeal to developers. Java's multitude of strengths as a programming language means its use in the digital world will only solidify. A language that was inherently designed for easy use has proved itself functional and secure over the course of more than two decades. Developers who appreciate technological change can also rest assured that the tried-and-true language of Java will likely always have a significant place in their toolset.

GPS aircraft tracking

By Samata Shelare

 

GPS aircraft tracking is used in both commercial and personal aircraft, and it comes with a variety of benefits for both safety and convenience. What a GPS does on an aircraft in terms of tracking is a lot different from what a GPS may do in your car. GPS tracking can help to confirm your position in the sky and keep you safe while going about a day of flying.
In order to understand the benefits of GPS aircraft tracking, one will first need to understand just how it works. A device with a GPS sensor is fixed into the aircraft, and it is able to transmit real-time GPS positions of any plane to a server on the ground. This sensor may be placed in a number of different areas or positions on the plane depending on the specific make and model, but all sensors work similarly in tracking a plane's current position at any time. These positions can then be picked up by air traffic controllers on the ground, who will be able to locate airplanes of all sizes and at all elevations, within any given area and at any given time.
GPS aircraft tracking can provide a number of benefits, even outside of the obvious benefits involving safety. The use of this type of technology can help to calculate flight times to and from any number of destinations, so that pilots can get a better understanding of their time of departure compared to their time of arrival, and it can also assist in finding an aircraft in the event of an accident. Additionally, GPS aircraft tracking can even be used in flight schools to allow pilots in training to follow a certain path or flight plan laid out by an instructor.
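
As a rough illustration of the flight-time use just mentioned, the sketch below estimates time en route from two GPS fixes using the haversine great-circle distance and an assumed ground speed; the example coordinates and the 750 km/h figure are invented. Real flight planning also accounts for winds, routing, and climb and descent.

```java
// Estimating flight time from two GPS fixes: great-circle distance via
// the haversine formula, divided by an assumed constant ground speed.
public class FlightTimeEstimate {
    static final double EARTH_RADIUS_KM = 6371.0;

    // Great-circle distance between two latitude/longitude fixes, in km.
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1), dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Example fixes: New Delhi to Mumbai, at an assumed 750 km/h ground speed.
        double distKm = haversineKm(28.6139, 77.2090, 19.0760, 72.8777);
        double hours = distKm / 750.0;
        System.out.printf("Distance: %.0f km, estimated flight time: %.1f h%n", distKm, hours);
    }
}
```
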
There are actually about 100 air traffic facilities already using ADS-B, which is why they are able to give such a firm estimate of 2020. This is nearly half of the 230 air traffic facilities in the world. Aviation experts believe 2020 is a good estimate for when every one of these facilities will be using the technology, with more and more adding it over the next 16 years. The hardest part is simply equipping the planes with the new system.
Tracking planes during their flight isn't the only thing the ADS-B GPS tracking system can do. It also has the ability to provide weather and other pertinent information to pilots in real time, so they have as advanced a warning as possible about the current environmental conditions that might impact their flying decisions.
One of the big issues in the past with the other 130 air traffic facilities is that it is easy to lose radar contact in certain areas of the world. As is most likely the case with the recently missing Malaysia Airlines plane, it was probably over water or in another location not easily tracked by ground-based radar. This makes a plane difficult to track and almost impossible to find out what happened to it.
Feith also mentioned that some flights require the new GPS tracking technology because they fly over the Atlantic or the Pacific Ocean and are more at risk of becoming lost during their flight.
GPS aircraft tracking is quite a bit different from the GPS technology we may use in our everyday lives in a car, but it provides the same level of benefits when it comes to convenience, safety, and ease of navigation.

GEAR DE-BURRING MACHINE

Gear deburring is a process that has changed substantially over the past 10 years. There have been advancements in the types of tools used for deburring operations and the development of “wet” machines, automatic load and unload, automatic part transfer and turnover, and vision systems for part identification, etc.

Three types of tools are used in the gear deburring process: grinding wheels, brushes, and carbide tools. A discussion of each method follows.

Grinding Wheels
There are many wheel grits available, from 320 grit for small burrs and light chamfers to 57 grit for large burrs and heavy chamfers, with numerous grit sizes in between. Grinding wheels will usually provide the required cosmetic appearance for a deburred gear. Setting up the grinding wheel is critical for good wheel life and consistent chamfers. The point of contact for the grinding wheel should be equal to the approach angle of the grinding head. For example, set a 45° approach angle for the grinding head with a protractor. Next, draw a line through the center of the grinding wheel, followed by a line drawn at 45° to the first line. The contact point between the gear and the grinding wheel should be at the 45° line.

The size of the chamfer attainable is determined by the size of the burr to be removed from the part. Three additional factors affect chamfer size: wheel grit size, the speed of the work spindle, and the amount of pressure applied to the part by the grinding wheel. Grinding wheel speed is noted on the grinding wheel, and it is usually 15,000 to 18,000 RPM. The grinding wheels used most often are aluminum oxide.

Brushes
Parts with small burrs can be effectively deburred with a brush. Two types of brushes are used for deburring operations, those being wire and nylon. Wire brushes are made with straight, crimped, or knotted bristles. The wire diameter and length will determine how aggressively the brush will deburr. Nylon brushes can be impregnated with either aluminum oxide or silicon carbide, with grit size ranging from 80 to 400. The specific application will determine which type of brush is required. In applications where a heavy burr is to be removed with a grinding wheel or carbide tool, a brush is often used as a secondary process for removing small burrs created by the first process.
Carbide Tools
The use of carbide deburring tools is a relatively new development. There are three advantages to using carbide tools:
  • Reduced deburring time. Carbide tools can run at 40,000 RPM, vs. 15,000 to 18,000 RPM for grinding wheels.

  • Reduced setup time, because there is no need to establish an approach angle as with a grinding wheel.

  • Ability to deburr cluster gears, or gears having the root of the tooth close to the gear shaft or hub.
Deburring Machine Features
The deburring process is accomplished with floating-style deburring heads having variable RPM air motors or turbines. The floating heads have air-operated, adjustable counterweights for adjusting the pressure applied to the part being deburred.
The floating heads can use grinding wheels, brushes, or carbide tools, and change-over from one to the other can be accomplished in a matter of minutes, providing versatility for doing a number of different parts on one machine.
ADVANTAGES:
1. Quick-action clamping.
2. Precise indexing.
3. A multi-module indexer makes de-burring of the full range of spur gears possible.
4. Fast de-burring due to the sequential operation of the grinding head and indexer mechanism.
5. Low-cost automation.
6. Flexibility of circuit design: can be converted to fully automatic mode with minimal circuit components.
7. Saves labor cost and the monotony of operation.

APPLICATIONS:
1. Machine tool manufacturing industry.
2. Agriculture machinery manufacturing.
3. Molded gear industry.
4. Timer pulley manufacturing.
5. Sprocket and chain wheel manufacturing, etc.

4G Wi-Fi Revolution

Wi-Fi is an extremely powerful resource that connects people, businesses, and, increasingly, the Internet of Things. It is used in our homes, colleges, businesses, favorite cafes, buses, and many of our public spaces. However, it is also a hugely complex technology. Designing, deploying, and maintaining a successful WLAN is no easy task; the goal is to make that task easier for WLAN administrators of all skill levels through education, knowledge-sharing, and community participation.
In malls, restaurants, hotels, and other service stations, Wi-Fi seems to be active. While supplemental downlink channels are 20 MHz each, Wi-Fi channels can be 20 MHz, 40 MHz, 80 MHz, or even 160 MHz wide. On many occasions I have had to switch off my Wi-Fi, as the speed was so poor, and go back to using 4G.
On my smartphone, most days I get 30-40 Mbps download speed, and it works superbly for all my needs. The only reason we would need higher speeds is to tether and use a laptop for work, watch videos, play games, listen to music, or download anything we want. Most of the people I know and work with don't require gigabit speed at the moment.
Once a user who is receiving high-speed data on their device via LTE-U / LAA creates a Wi-Fi hotspot, it may use the same 5 GHz channels as the ones the network is using for supplemental downlink. The user then wonders why their download speed falls as soon as they switch Wi-Fi on.
The fact is that in rural and even general built-up areas, operators do not have to worry about the network being overloaded and can use their licensed spectrum; nobody is planning to place LTE-U / LAA in these areas. In dense and ultra-dense areas, there are many more users, many more Wi-Fi access points, ad-hoc Wi-Fi networks, and many other sources of interference.

Smart Home Technology

Smart-home technology enables homeowners to monitor their houses remotely, countering dangers such as a forgotten coffee maker left on or a front door left unlocked.

Smart homes are also beneficial for the elderly, providing monitoring that can help seniors to remain at home comfortably and safely, rather than moving to a nursing home or requiring 24/7 home care.

Unsurprisingly, smart homes can accommodate user preferences. For example, as soon as you arrive home, your garage door will open, the lights will go on, the fireplace will roar and your favorite tunes will start playing on your smart speakers.

 

Home automation also helps consumers improve efficiency. Instead of leaving the air conditioning on all day, a smart home system can learn your behaviors and make sure the house is cooled down by the time you arrive home from work. The same goes for appliances. And with a smart irrigation system, your lawn will only be watered when needed and with the exact amount of water necessary. With home automation, energy, water and other resources are used more efficiently, which helps save both natural resources and money for the consumer.

However, home automation systems have struggled to become mainstream, in part due to their technical nature. A drawback of smart homes is their perceived complexity; some people have difficulty with technology or will give up on it with the first annoyance. Smart home manufacturers and alliances are working on reducing complexity and improving the user experience to make it enjoyable and beneficial for users of all types and technical levels.

For home automation systems to be truly effective, devices must be inter-operable regardless of who manufactured them, using the same protocol or, at least, complementary ones. As it is such a nascent market, there is no gold standard for home automation yet. However, standard alliances are partnering with manufacturers and protocols to ensure inter-operability and a seamless user experience.

“Intelligence is the ability to adapt to change.”

Stephen Hawking

 

How smart homes work/smart home implementation

Newly built homes are often constructed with smart home infrastructure in place. Older homes, on the other hand, can be retrofitted with smart technologies. While many smart home systems still run on X10 or Insteon, Bluetooth and Wi-Fi have grown in popularity.

Zigbee and Z-Wave are two of the most common home automation communications protocols in use today. Both mesh network technologies, they use short-range, low-power radio signals to connect smart home systems. Though both target the same smart home applications, Z-Wave has a range of 30 meters to Zigbee’s 10 meters, with Zigbee often perceived as the more complex of the two. Zigbee chips are available from multiple companies, while Z-Wave chips are only available from Sigma Designs.

A smart home is not disparate smart devices and appliances, but ones that work together to create a remotely controllable network. All devices are controlled by a master home automation controller, often called a smart home hub. The smart home hub is a hardware device that acts as the central point of the smart home system and is able to sense, process data and communicate wirelessly. It combines all of the disparate apps into a single smart home app that can be controlled remotely by homeowners. Examples of smart home hubs include Amazon Echo, Google Home, Insteon Hub Pro, Samsung SmartThings and Wink Hub, among others.

Some smart home systems can be created from scratch, for example, using a Raspberry Pi or other prototyping board. Others can be purchased as a bundled smart home kit, also known as a smart home platform, that contains the pieces needed to start a home automation project.

In simple smart home scenarios, events can be timed or triggered. Timed events are based on a clock, for example, lowering the blinds at 6:00 p.m., while triggered events depend on actions in the automated system; for example, when the owner’s smartphone approaches the door, the smart lock unlocks and the smart lights go on.
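
A toy Java sketch of those two event styles follows; the device names and the phone-proximity condition are invented for the example.

```java
import java.time.LocalTime;

// Toy illustration of the two event styles: a timed rule fires once the
// clock reaches a set time, a triggered rule fires on a sensed condition.
public class SmartHomeRules {
    interface Rule { void evaluate(); }

    // Timed event: lower the blinds at 18:00 (fires once the clock reaches it).
    static Rule timed(LocalTime at, Runnable action) {
        return () -> { if (!LocalTime.now().isBefore(at)) action.run(); };
    }

    // Triggered event: when the owner's phone is near the door, unlock and light up.
    static Rule triggered(java.util.function.BooleanSupplier condition, Runnable action) {
        return () -> { if (condition.getAsBoolean()) action.run(); };
    }

    public static void main(String[] args) {
        boolean phoneNearDoor = true;  // stand-in for a real presence sensor
        Rule blinds = timed(LocalTime.of(18, 0), () -> System.out.println("Lowering blinds"));
        Rule welcome = triggered(() -> phoneNearDoor,
                () -> System.out.println("Unlocking smart lock, lights on"));
        // A hub would evaluate registered rules on a schedule or on sensor events.
        blinds.evaluate();
        welcome.evaluate();
    }
}
```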

Smart home technology involves the control and automation of lighting, heating (such as smart thermostats), ventilation, air conditioning (HVAC), and security (such as smart locks), as well as home appliances such as washers/dryers, ovens, and refrigerators/freezers. Wi-Fi is often used for remote monitoring and control. Home devices, when remotely monitored and controlled via the Internet, are an important constituent of the Internet of Things. Modern systems generally consist of switches and sensors connected to a central hub, sometimes called a "gateway", from which the system is controlled through a user interface, accessed via a wall-mounted terminal, mobile phone software, a tablet computer, or a web interface, often but not always via Internet cloud services.

While there are many competing vendors, there are very few worldwide accepted industry standards and the smart home space is heavily fragmented. Manufacturers often prevent independent implementations by withholding documentation and by litigation.

 

Eye Ring

EyeRing is a wearable interface that allows using a pointing gesture or touch to access digital information about objects and the world. The idea of a micro camera worn as a ring on the index finger started as an experimental assistive technology for visually impaired persons; however, soon enough we realized the potential for assistive interaction across the usability spectrum, for children and visually-able adults as well. With a button on the side, which can be pushed with the thumb, the ring takes a picture or a video that is sent wirelessly to a mobile phone.

A computation element, embodied as a mobile phone, is in turn accompanied by an earpiece for information loopback. The finger-worn device is autonomous and wireless. A single button initiates the interaction. Information transferred to the phone is processed, and the results are transmitted to the headset for the user to hear.

Several videos about EyeRing have been made, one of which shows a visually impaired person making his way through a retail clothing environment, touching T-shirts on a rack as he tries to find his preferred color and size and to learn the price. He uses his EyeRing finger to point at a shirt and hear that its color is gray, and he points at the price tag to find out how much the shirt costs.

The researchers note that a user needs to pair the finger-worn device with the mobile phone application only once. Henceforth a Bluetooth connection will be automatically established when both are running.

The Android application on the mobile phone analyzes the image using the team's computer vision engine. The type of analysis and response depends on the pre-set mode, for example, color, distance, or currency. Upon analyzing the image data, the Android application uses a text-to-speech module to read out the information through a headset, according to the researchers.

The MIT group behind EyeRing comprises Suranga Nanayakkara, visiting faculty in the Fluid Interfaces group at the MIT Media Lab and also a professor at the Singapore University of Technology and Design; Roy Shilkrot, a first-year doctoral student in the group; and Patricia Maes, associate professor and founder of the Media Lab's Fluid Interfaces group.

EyeRing in concept is promising, but the team expects the prototype to evolve through more iterations to come. They are now at the stage where they want to prove it is a viable solution, yet they seek to make it better. The EyeRing creators say that their work is still very much in progress. The current implementation uses a TTL serial JPEG camera, a 16 MHz AVR processor, a Bluetooth module, a 3.7 V lithium-ion polymer battery, a 3.3 V regulator, and a push-button switch. They also look forward to a device that can carry advanced capabilities such as a real-time video feed from the camera, higher computational power, and additional sensors like gyroscopes and a microphone. These capabilities are in development for the next prototype of EyeRing.

A Finger-worn Assistant

The desire to replace an impaired human visual sense or augment a healthy one had a strong influence on the design and rationale behind EyeRing. To that end, we propose a system composed of a finger-worn device with an embedded camera, a computing element embodied as a mobile phone, and an earpiece for audio feedback. The finger-worn device is autonomous and wireless, and it includes a single button to initiate the interaction. Information from the device is transferred to the computation element, where it is processed, and the results are transmitted to the headset for the user to hear. Typically, a user would single-click the push-button switch on the side of the ring using his thumb. At that moment, a snapshot is taken by the camera, and the image is transferred via Bluetooth to the mobile phone. An Android application on the mobile phone then analyzes the image using our computer vision engine. Upon analyzing the image data, the Android application uses a text-to-speech module to read out the information through a hands-free headset. Users can change the preset mode by double-clicking the push button and giving the system a brief verbal command, such as distance, color, or currency.
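
That interaction loop lends itself to a simple state machine. The sketch below is an illustrative Java model of the click handling and mode switching described above; the camera, vision engine, and text-to-speech stages are stubbed out as prints, since the real system spans the ring, an Android phone, and a headset.

```java
// Illustrative model of the EyeRing interaction: a single click captures
// and analyzes an image in the current mode; a double click plus a spoken
// word switches the preset mode.
public class EyeRingController {
    enum Mode { COLOR, DISTANCE, CURRENCY }
    private Mode mode = Mode.COLOR;

    void onSingleClick() {
        // Ring: snap a photo and ship it over Bluetooth to the phone.
        System.out.println("Captured image, analyzing in mode " + mode);
        // Phone: run the computer-vision engine, then speak the result.
        System.out.println("TTS: \"gray\"");  // e.g. the T-shirt color scenario
    }

    void onDoubleClick(String spokenCommand) {
        // Double click + brief verbal command changes the preset mode.
        mode = Mode.valueOf(spokenCommand.toUpperCase());
        System.out.println("Mode switched to " + mode);
    }

    public static void main(String[] args) {
        EyeRingController ring = new EyeRingController();
        ring.onSingleClick();             // "what color is this?"
        ring.onDoubleClick("currency");   // switch mode by voice
        ring.onSingleClick();             // now identifies banknotes
    }
}
```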
