Tech-Forum

Emerging cyber threats you should be concerned about?

 

In this digital world we are obsessed with technology, and technology changes every second. As it evolves, threats in cybersecurity grow every day.

Cybersecurity comprises technologies, processes, and controls that are designed to protect systems, networks, and data from cyber-attacks. Effective cybersecurity reduces the risk of cyber-attacks and protects organizations and individuals from the unauthorized exploitation of systems, networks, and technologies.

Consequences of a Cyber-attack

Cyber-attacks can disrupt operations and cause considerable financial and reputational damage to even the most resilient organization. If you suffer a cyber-attack, you stand to lose assets, reputation, and business, and potentially face regulatory fines and litigation, as well as the costs of remediation.

Today we face a constant stream of cyber-attacks, from monetary fraud to information breaches.

Nowadays, ordinary people have little idea of the potential threats. They are frequently exposed to social engineering, handing over their information to third parties with malicious intent. As a result, they suffer losses of money, data, or both.

In India, most of an individual’s information is linked to the Aadhaar card, which provides a unique identification number and stores the person’s fingerprints. Bank accounts are also linked to Aadhaar, so it is our responsibility to secure that information from hackers.

 

To know more about hacking, see the Introduction to Hacking section below.

 

Threats to organizations

All Internet-facing organizations are at risk of attack. And it’s not a question of if you’ll be attacked, but when. The majority of cyber-attacks are automated and indiscriminate, exploiting known vulnerabilities rather than targeting specific organizations. Your organization could be under attack right now and you might not even be aware of it.

Up to 70% of emails today are spam, and the vast majority of these still involve phishing scams. Other common threats include ransomware, malware, and distributed denial-of-service (DDoS) attacks, all of which have been responsible for major data breaches in recent months and which can leave both company and customer data vulnerable to cyber-criminals. A massive 93% of data breaches are motivated by financial gain, according to a recent Verizon report. Hackers aim for the highest return for the least amount of effort, which is why smaller businesses with lax security are often successfully targeted.

Banking, Financial Services and Insurance (BFSI): The BFSI sector is under growing pressure to update its legacy systems to compete with new digital-savvy competitors. The value of the customer data it holds has grown as consumers demand more convenient and personalized service, but trust is essential. Some 50% of customers would consider switching banks if theirs suffered a cyber-attack, while 47% would “lose complete trust” in them, according to a recent study. A number of major banks around the world have already been subject to high-profile cyber-attacks, suggesting that the sector needs to improve its approach to risk. Financial firms should invest in security applications that can adapt to the future of banking to ensure comprehensive, around-the-clock security. Shared ledgers will feature prominently in the future of the BFSI sector; the best-known example is the blockchain, which forms the backbone of the cryptocurrency Bitcoin. A blockchain is a database that provides a permanent record of transactions. It leaves an undisputed audit trail that can’t be tampered with, meaning it could completely transform security in the BFSI sector.
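To see why such a shared ledger is tamper-evident, consider a minimal hash-chain sketch in Java (a hypothetical MiniLedger class, not any production blockchain): each block commits to the hash of the previous block, so altering any historical transaction invalidates every later hash.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.ArrayList;
    import java.util.List;

    // Minimal hash-chained ledger: each block commits to the previous block's
    // hash, so changing any historical entry breaks every subsequent link.
    public class MiniLedger {

        static class Block {
            final String data;      // e.g. a transaction record
            final String prevHash;  // hash of the previous block
            final String hash;      // SHA-256 over data + prevHash

            Block(String data, String prevHash) throws Exception {
                this.data = data;
                this.prevHash = prevHash;
                this.hash = sha256(data + prevHash);
            }
        }

        static String sha256(String input) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            List<Block> chain = new ArrayList<>();
            chain.add(new Block("genesis", "0"));
            chain.add(new Block("Alice pays Bob 10", chain.get(0).hash));
            chain.add(new Block("Bob pays Carol 4", chain.get(1).hash));

            // Audit the trail: every block must reference its predecessor's hash.
            for (int i = 1; i < chain.size(); i++) {
                boolean intact = chain.get(i).prevHash.equals(chain.get(i - 1).hash);
                System.out.println("Block " + i + " link intact: " + intact);
            }
        }
    }

Rewriting any block would change its hash and break every link after it, which is exactly the undisputed audit trail described above.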

AI-integrated attacks

AI is a double-edged sword: 91% of security professionals are concerned that hackers will use AI to launch even more sophisticated cyber-attacks.

 

AI can be used to automate the collection of certain information — perhaps relating to a specific organization — which may be sourced from support forums, code repositories, social media platforms and more. Additionally, AI may be able to assist hackers when it comes to cracking passwords by narrowing down the number of probable passwords based on geography, demographics and other such factors.

Telecom: Telecom firms, as carriers of internet data, face significant cybersecurity risk and therefore carry a huge responsibility. Providers need to integrate cybersecurity measures into network hardware, software, applications and end-user devices in order to minimize the risk of a serious data breach, which could leave customer credentials and communications vulnerable. Consumers are increasingly careful about who they entrust their personal data to, providing a strong opportunity for networks that offer additional security services. In addition, collaboration between rival operators could lead to greater resilience against cyber attackers.


 

As technologies progress, the skills needed to deal with cyber-security threats are changing. The challenge is to train cyber-security professionals so that they can deal with threats as quickly as possible and adapt their skills as needed. An estimated 3.5 million cyber-security roles will be unfilled by 2021.

Designing a logo—Creativity in graphics

 

Creative design comes in handy when working in the field of graphic design. After all, it’s too easy to lose steam in the midst of cranking out design after design. Fear not! For your inspiration needs, we’ve combed the internet to bring you the best designs in existence.

Research your audience: Designing a logo is not just about creating an appealing visual. Like the overall color scheme and design of your site, your logo sets your brand apart from the competition and shows people that you’re a legitimate business. Logos are a critical part of the modern visual landscape. Throw yourself into the brand. Save all your sketches. Research online. Create mind maps or mood boards. Build a board and tear it apart. Stop with the clichés. Know your customer’s needs. Give them a variety of designs to consider, and ask what message they want the logo to convey. Don’t just be a designer: be a good one. Designing an effective logo is not a quick or easy process. It requires thorough research, thought, care and attention to ensure the final design targets the correct market and broadcasts the right message. A poorly designed logo will have a negative effect on the perception of your business; a carefully designed logo can transform a business by attracting the right people.

Idea generation: When working on designs you should start on paper, using idea generation techniques such as brainstorming and word mapping. This is typically a very organic process and can vary from project to project. Any idea that comes to mind should be sketched on paper to explore the full depth of potential ideas for the logo design. Stay updated on current trends in graphic design: it is revealing to see dozens of design examples categorized by trend, with each trend’s popularity charted by year.

Once the idea has been explored on paper, you should begin to work on the designs in Adobe Illustrator, a vector-based program, which means the artwork produced is scalable and will never lose quality. Continue to explore and experiment with the ideas even at this stage, to ensure each idea is presented in its best possible light. Use vector shapes in Adobe Illustrator CC to create a logo that looks good onscreen and in print. The best part about vector art is that it scales to any size: whether on a small business card or a large billboard, it can be resized without losing quality. Also look into the golden ratio; it will help you better understand various designs.

 

Once you have several solid designs prepared, allow an incubation period of at least a day in which you do not look at the designs. During this time you may think of new ideas you would like to explore, and you can return to the project with a fresh perspective. Following this period, return to the designs, refine the work where needed and select the most suitable designs to present. Add vector elements to your logo and create some mind-blowing designs to draw in your clients.

Designing the idea and presenting: Once the designs are ready to present, create a PDF document that displays the logo designs, with images of the designs in real-life examples, along with supporting notes explaining the decisions made. Present only designs you are confident in, and give your opinion on which you believe will be most suitable for your client’s business. Leave the final choice to your client, and if there is any possibility of improvement, the design can be modified to better meet the objectives.

Finalizing the project: Once the design process is complete and both you and your client are satisfied with the final design, finalize the new logo files for use on the web and in print. Prepare a logo usage document to help your client make the most of the new logo as their business grows, and be available for ongoing help and support with the logo files.

 

Introduction to Hacking


Hacking is an attempt to exploit a computer system or a private network: unauthorized access to someone’s computer, a security breach of a network or system, and the extraction of data or information without permission. Computer hacking refers to the practice of modifying or altering computer software and hardware to accomplish a goal considered to be outside of the creator’s original objective.

The person engaged in hacking activities is known as a hacker. A hacker may alter system or security features to accomplish a goal that differs from the original purpose of the system. The majority of hackers possess advanced knowledge of computer technology; the typical hacker has expert-level command of particular programs and advanced computer programming abilities.

The common types of hackers are:

1. Script Kiddie: You could call them the noobs of the digital world. They don’t know real hacking, nor do they care; they just copy code and use it for viruses or SQL injection. A common Script Kiddie attack is DoSing or DDoSing (Denial of Service and Distributed Denial of Service), in which they flood an IP address with so much traffic that it collapses under the strain.

2. White Hat / Ethical Hackers: These are the good guys of the hacker world. They often provide cybersecurity services to companies and can help you remove viruses. White Hat hackers typically hold a college degree in IT security or computer science and are certified to pursue a career in ethical hacking.

3. Black Hat / Crackers: These are the notorious pirates people are scared of. They find banks or other companies with weak security and steal money or credit card information. The surprising truth about their methods is that they often use common hacking practices they learned early on. They can compromise a system and extract valuable information.

4. Gray Hat: Nothing is ever just black or white, and the same is true in the world of hacking. Gray Hat hackers don’t steal money or information (although they sometimes deface a website or two), yet they don’t help people for good either (but they could if they wanted to).

5. Green Hat: These hackers care about hacking and strive to become full-blown hackers. They’re often flamed by the hacker community for asking many basic questions, and when their questions are answered, they’ll listen with the intent and curiosity of a child listening to family stories.

6. Blue Hat: When a Script Kiddie takes revenge, they might become a Blue Hat. Blue Hat hackers seek vengeance on those who have made them angry. Most Blue Hats are noobs, but like Script Kiddies, they have no desire to learn.

7. Red Hat: These are the vigilantes of the digital world. They don’t care to arrest or report black hats; rather, they would like to shut them down for good. They use multiple aggressive methods that might force a black hat to need a new computer, aiming to destroy the black hat operation at its root.

Artificial Eye

By Author – Rishabh Sontakke

 

An artificial eye is a replacement for a natural eye lost to injury or disease. Although the replacement cannot provide sight, it fills the cavity of the eye socket and serves as a cosmetic enhancement. Before artificial eyes became available, a person who lost an eye wore a patch. An artificial eye can be attached to muscles in the socket to provide eye movement.

Today, most artificial eyes are made of plastic, with an average life of about 10 years. Children require more frequent replacement of the artificial eye due to rapid growth; as many as four or five artificial eyes may be required from babyhood to adulthood.

According to the Society for the Prevention of Blindness, between 10,000 and 12,000 people per year lose an eye. Though 50% or more of these eye losses are caused by an accident (in one survey more males lost their eyes to accidents compared to females), there are a number of genetic conditions that can cause eye loss or require an artificial eye. Microphthalmia is a birth defect where for some unknown reason the eye does not develop to its normal size. These eyes are totally blind, or at best might have some light sensitivity.

 

Society is an artificial construction, a defense against nature’s power


Some people are also born without one or both eyeballs, a condition called anophthalmia.

Retinoblastoma is a congenital (existing at birth) cancer or tumor, which is usually inherited. If a person has this condition in just one eye, the chances of passing it on are one in four or 25%.

There are two key steps in replacing a damaged or diseased eye.

–First, an ophthalmologist or eye surgeon must remove the natural eye. There are two types of operations:

  • Enucleation removes the entire eyeball. The surgeon severs the muscles connected to the sclera (the white of the eyeball), cuts the optic nerve, and removes the eye from the socket.
  • Evisceration: the surgeon makes an incision around the iris and then removes the contents of the eyeball. A ball made of some inert material such as plastic, glass, or silicone is placed inside the eyeball shell, and the wound is closed.

–Second, an implant is placed into the socket to restore lost volume and to give the artificial eye some movement, and the wound is then closed.

Conformer: The surgeon then places a conformer (a plastic disc) into the socket. The conformer prevents shrinking of the socket and retains adequate pockets for the artificial eye. Conformers are made of silicone or hard plastic. After the surgery, it takes the patient four to six weeks to heal. The artificial eye is then made and fitted by a professional.

Raw Materials

Plastic is the main material that makes up the artificial eye. Wax and plaster of Paris are used to make the molds. A white powder called alginate is used in the molding process. Paints and other decorating materials are used to add life-like features to the prosthesis.

 

The eyes are the mirror of the soul


The Manufacturing Process

The time to make an artificial eye from start to finish varies with each ocularist and the individual patient. A typical time is about 3.5 hours. Ocularists continue to look for ways to reduce this time.

There are two types of artificial eye.

–The very thin shell type is fitted over a blind, disfigured eye or over an eye that has been only partially removed.

–The full modified impression type is made for those who have had the eyeball completely removed. The process described here is for the latter type.

  1. The ocularist inspects the condition of the socket.

  2. The ocularist paints the iris. An iris button (made from a plastic rod using a lathe) is selected to match the patient’s own iris diameter.

  3. Next, the ocularist hand-carves a wax molding shell. This shell has an aluminum iris button embedded in it that duplicates the painted iris button. The wax shell is fitted into the patient’s socket so that it matches the irregular periphery of the socket.

  4. The impression is made using alginate, a white powder made from seaweed that is mixed with water to form a cream. After mixing, the cream is placed on the back side of the molding shell and the shell is inserted into the socket.

  5. The iris color is then rechecked and any necessary changes are made.

  6. A plaster-of-Paris cast is made of the mold of the patient’s eye socket. After the plaster has hardened (about seven minutes), the wax and alginate mold are removed and discarded.

  7. The plastic hardens in the shape of the mold with the painted iris button embedded in the proper place.

  8. The prosthesis is then returned to the cast. Clear plastic is placed in the anterior half of the cast and the two halves are again joined, placed under pressure, and returned to the hot water. The artificial eye is finally ready for fitting.

 

The eyes tell more than words could ever say

 

The Future


Improvements to the artificial eye will continue, benefiting both patient and ocularist. Several developments have already occurred in recent years. An artificial eye with two different-sized pupils that the wearer can switch back and forth was invented in the early 1980s. In the same period, a soft contact lens with a large black pupil was developed that simply rests on the surface of the artificial eye.

In 1989, a patented implant called the Bio-eye was cleared by the United States Food and Drug Administration. Today, over 25,000 people worldwide have benefited from this development, which is made from hydroxyapatite, a material converted from ocean coral that has both the porous structure and the chemical composition of bone. In addition to natural eye movement, this type of implant has reduced migration and extrusion, and it prevents drooping of the lower lid by lending support to the artificial eye via a peg connection.

With advancements in computer, electronics, and biomedical engineering technology, it may someday be possible to have an artificial eye that can provide sight as well. Work is already in progress to achieve this goal, based on advanced microelectronics and sophisticated image recognition techniques.

Researchers at MIT and Harvard University are also developing what will be the first artificial retina. This is based on a biochip that is glued to the ganglion cells, which act as the eye’s data concentrators. The chip is composed of a tiny array of etched-metal electrodes on the retina side and a single sensor with integrated logic on the pupil side. The sensor responds to a small infrared laser that shines onto it from a pair of glasses worn by the artificial-retina recipient.

Introduction to Java

By Author – Rashmita Soge

 

Java is a programming language created by James Gosling at Sun Microsystems (Sun) in 1991. The goal of Java is to write a program once and then run that program on multiple operating systems. The first publicly available version of Java (Java 1.0) was released in 1995. Sun Microsystems was acquired by the Oracle Corporation in 2010, and Oracle now has stewardship of Java. In 2006 Sun started to make Java available under the GNU General Public License (GPL); Oracle continues this project, called OpenJDK. Over time, new enhanced versions of Java have been released. The current version is Java 1.8, also known as Java 8.

Java is defined by a specification and consists of a programming language, a compiler, core libraries and a runtime (the Java virtual machine). The Java runtime allows software developers to write program code in languages other than the Java programming language that still run on the Java virtual machine. The Java platform is usually associated with the Java virtual machine and the Java core libraries.

What is Java?

Java is a general-purpose, class-based, object-oriented, platform-independent, portable, architecture-neutral, multithreaded, dynamic, distributed, robust, interpreted programming language.

It is intended to let application developers “write once, run anywhere”, meaning that compiled Java code can run on all platforms that support Java without the need for recompilation.

History of Java

Java is the brainchild of Java pioneer James Gosling, who traces Java’s core idea of “write once, run anywhere” back to work he did in graduate school.

After spending time at IBM, Gosling joined Sun Microsystems in 1984. In 1991, Gosling partnered with Sun colleagues Michael Sheridan and Patrick Naughton on Project Green, to develop new technology for programming next-generation smart appliances. Gosling, Naughton, and Sheridan set out to develop the project based on certain rules, specifically tied to performance, security, and functionality. Those rules were that Java must be:

  1. Secure and robust
  2. High performance
  3. Portable and architecture-neutral, which means it can run on any combination of software and hardware
  4. Threaded, interpreted, and dynamic
  5. Object-oriented

Over time, the team added features and refinements that extended the heritage of C and C++, resulting in a new language called Oak, named after a tree outside Gosling’s office.

After efforts to use Oak for interactive television failed to materialize, the technology was retargeted for the World Wide Web. The team also began working on a web browser as a demonstration platform.

Because of a trademark conflict, Oak was renamed Java, and in 1995 Java 1.0a2 was released along with the browser, named HotJava. The Java language was designed with the following properties:

  • Platform independent: Java programs use the Java virtual machine as an abstraction and do not access the operating system directly. This makes Java programs highly portable. A Java program (which is standard-compliant and follows certain rules) can run unmodified on all supported platforms, e.g., Windows or Linux.
  • Object-oriented programming language: Except for the primitive data types, all elements in Java are objects.
  • Strongly-typed programming language: The types of variables used must be declared in advance, and conversion to other types is relatively strict, in most cases requiring an explicit cast by the programmer (see the sketch after this list).
  • Interpreted and compiled language: Java source code is translated into the bytecode format, which does not depend on the target platform. These bytecode instructions are interpreted by the Java virtual machine (JVM). The JVM contains a so-called HotSpot compiler, which translates performance-critical bytecode instructions into native code instructions.
  • Automatic memory management: Java manages the memory allocation and de-allocation for newly created objects. The program does not have direct access to the memory. The so-called garbage collector automatically deletes objects to which no active reference exists (see the sketch after this list).
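As a small illustration of the last two properties (a hypothetical LanguageProperties class, not from any official tutorial), the following sketch shows the compiler forcing an explicit narrowing conversion and the garbage collector being left to reclaim an unreferenced object:

    // Illustrates strong typing and automatic memory management in Java.
    public class LanguageProperties {
        public static void main(String[] args) {
            double price = 9.99;
            // int cents = price * 100;      // compile error: possible lossy conversion
            int cents = (int) (price * 100); // narrowing must be explicit (strong typing)
            System.out.println("Cents: " + cents);

            // Automatic memory management: once no reference to the array remains,
            // the garbage collector is free to reclaim it; the program never frees
            // memory explicitly.
            byte[] buffer = new byte[1024 * 1024];
            buffer = null;   // drop the only reference
            System.gc();     // a hint only; collection timing is up to the JVM
        }
    }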

How Java works

To understand the primary advantage of Java, you’ll have to learn about platforms. In most programming languages, a compiler generates code that can execute on a specific target machine. For example, if you compile a C++ program on a Windows machine, the executable file can be copied to any other machine, but it will only run on other Windows machines, never on a different platform. A platform is determined by the target machine along with its operating system. For earlier languages, language designers needed to create a specialized version of the compiler for every platform. If you wrote a program that you wanted to make available on multiple platforms, you, as the programmer, would have to do quite a bit of additional work: you would have to create multiple versions of your source code, one for each platform.

Java succeeded in eliminating the platform issue for high-level programmers because it reorganized the compile-link-execute sequence at an underlying level of the compiler. The details are complicated but, essentially, the designers of the Java language isolated those programming issues which depend on the platform and developed low-level means to refer to these issues abstractly. Consequently, the Java compiler doesn’t create an object file; instead it creates a bytecode file which is, essentially, an object file for a virtual machine. In fact, the Java compiler is often called the JVM compiler. To summarize how Java works, think about the compile-link-execute cycle. In earlier programming languages, the cycle is more closely defined as “compile-link then execute”. In Java, the cycle is closer to “compile then link-execute”.
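A minimal example makes the cycle concrete. Compiling the hypothetical class below with javac produces HelloJvm.class, a platform-neutral bytecode file rather than a native executable; the same .class file then runs unmodified on any machine with a JVM:

    // HelloJvm.java
    // Compile once:  javac HelloJvm.java  -> produces platform-neutral HelloJvm.class
    // Run anywhere:  java HelloJvm        -> the local JVM "link-executes" the bytecode
    public class HelloJvm {
        public static void main(String[] args) {
            System.out.println("Hello from the JVM on " + System.getProperty("os.name"));
        }
    }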

Future of Java

Java is not a legacy programming language, despite its long history. The robust use of Maven, the build tool for Java-based projects, debunks the theory that Java is outdated. Although there are a variety of deployment tools on the market, Apache Maven has by far been one of the most widely used automation tools developers rely on to deploy software applications.

With Oracle’s commitment to Java for the long haul, it’s not hard to see why Java will remain a chosen programming language for years to come. 2017 will see the release of the eighth enterprise edition, Java EE 8.

Despite its areas for improvement, and the threat from rival programming languages like .NET, Java is here to stay. Oracle has plans for a new version release in the early part of 2017, with new supportive features that will strongly appeal to developers. Java’s multitude of strengths as a programming language means its use in the digital world will only solidify. A language that was inherently designed for easy use has proved itself functional and secure over the course of more than two decades. Developers who appreciate technological change can also rest assured that the tried-and-true Java language will likely always have a significant place in their toolset.

Multi-factor authentication (MFA)


Multi-factor authentication is a method of computer access control in which a user is granted access only after successfully presenting several separate pieces of evidence to an authentication mechanism, typically at least two of the following categories: knowledge (something they know), possession (something they have), and inherence (something they are).

Two-factor authentication

Two-factor authentication is a type of multi-factor authentication: a combination of two different components.

A good example from everyday life is the withdrawing of money from an ATM; only the correct combination of the bank card (something that the user possesses) and a PIN (personal identification number, something that the user knows) allows the transaction to be carried out.

 

The authentication factors of a multi-factor authentication scheme may include:

  • Some physical object in the possession of the user, such as a USB stick with a secret token, a bank card, a key, etc.
  • Some secret known to the user, such as a password, PIN, TAN, etc.
  • Some physical characteristic of the user (biometrics), such as a fingerprint, eye iris, voice, typing speed, pattern in key press intervals, etc.

 

Knowledge factors

Knowledge factors are the most commonly used form of authentication. In this form, the user is required to prove knowledge of a secret in order to authenticate.

A password is a secret word or string of characters that is used for user authentication. This is the most commonly used mechanism of authentication. Many multi-factor authentication techniques rely on passwords as one factor of authentication. Variations include longer passwords formed from multiple words (a passphrase) and shorter, purely numeric personal identification numbers (PINs) commonly used for ATM access. Traditionally, passwords are expected to be memorized.
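Because a password is a memorized secret, systems should store only a salted, deliberately slow hash of it, never the plaintext. A minimal sketch using Java’s built-in PBKDF2 support (the PasswordFactor class, iteration count and salt size are illustrative choices, not a complete credential store):

    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    // Stores and checks a knowledge factor as a salted PBKDF2 hash
    // instead of the plaintext password.
    public class PasswordFactor {

        static byte[] hash(char[] password, byte[] salt) throws Exception {
            PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
            SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            return f.generateSecret(spec).getEncoded();
        }

        public static void main(String[] args) throws Exception {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);     // unique random salt per user

            byte[] stored  = hash("correct horse".toCharArray(), salt); // at enrollment
            byte[] attempt = hash("correct horse".toCharArray(), salt); // at login

            // Constant-time comparison resists timing attacks.
            System.out.println("Authenticated: " + MessageDigest.isEqual(stored, attempt));
        }
    }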

Possession factors

Possession factors (“something only the user has”) have been used for authentication for centuries, in the form of a key to a lock. The basic principle is that the key embodies a secret which is shared between the lock and the key, and the same principle underlies possession factor authentication in computer systems. A security token is an example of a possession factor.

Disconnected tokens

Disconnected tokens have no connections to the client computer. They typically use a built-in screen to display the generated authentication data, which is manually typed in by the user.
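Many disconnected tokens, and software authenticator apps, display a time-based one-time password (TOTP). The sketch below follows the standard construction from RFC 6238 (an HMAC over a 30-second time counter, dynamically truncated to six digits); the shared secret shown is the RFC’s published test value, not a real credential:

    import java.nio.ByteBuffer;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    // Time-based one-time password (RFC 6238): the token and the server derive
    // the same 6-digit code from a shared secret and the current time.
    public class Totp {

        static int code(byte[] secret, long unixSeconds) throws Exception {
            long counter = unixSeconds / 30;                 // 30-second time step
            byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();

            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secret, "HmacSHA1"));
            byte[] h = mac.doFinal(msg);

            int offset = h[h.length - 1] & 0x0F;             // dynamic truncation
            int binary = ((h[offset]     & 0x7F) << 24)
                       | ((h[offset + 1] & 0xFF) << 16)
                       | ((h[offset + 2] & 0xFF) << 8)
                       |  (h[offset + 3] & 0xFF);
            return binary % 1_000_000;                       // 6-digit code
        }

        public static void main(String[] args) throws Exception {
            byte[] sharedSecret = "12345678901234567890".getBytes(); // RFC test secret
            System.out.printf("Current OTP: %06d%n",
                    code(sharedSecret, System.currentTimeMillis() / 1000));
        }
    }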

Connected tokens

Connected tokens are devices that are physically connected to the computer being used and transmit data automatically. There are a number of different types, including card readers, wireless tags and USB tokens.

Inherence factors

These are factors associated with the user themselves, and are usually biometric methods, including fingerprint readers, retina scanners and voice recognition.

 

On-screen fingerprint sensor

The first on-screen fingerprint sensor

The world’s first phone with a fingerprint scanner built into the display was as awesome as I hoped it would be.

There’s no home button breaking up your screen space, and no fumbling for a reader on the phone’s back. I simply pressed my index finger on the phone screen in the place where the home button would be. The screen registered my digit, then spun up a spiderweb of blue light in a pattern that instantly brings computer circuits to mind. I was in.

Such a simple, elegant harbinger of things to come: a home button that appears only when you need it and then gets out of the way.

How in-display fingerprint readers work

In fact, the fingerprint sensor — made by sensor company Synaptics — lives beneath the 6-inch OLED display. That’s the “screen” you’re actually looking at beneath the cover glass.

When your fingertip hits the target, the sensor array turns on the display to light your finger, and only your finger. The image of your print makes its way to an optical image sensor beneath the display.

It’s then run through an AI processor that’s trained to recognize 300 different characteristics of your digit, like how close the ridges of your fingers are. It’s a different kind of technology than what most readers use in today’s phones.
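As a loose illustration of that matching step (entirely hypothetical; Synaptics has not published its algorithm), a matcher can reduce each print to a numeric feature vector and accept the finger when the captured vector is close enough to the enrolled template:

    // Hypothetical sketch of fingerprint matching: compare a captured feature
    // vector (e.g. ridge spacing, minutiae angles) against the enrolled template.
    public class FingerprintMatcher {

        static double similarity(double[] enrolled, double[] captured) {
            double dot = 0, normA = 0, normB = 0;
            for (int i = 0; i < enrolled.length; i++) {
                dot   += enrolled[i] * captured[i];
                normA += enrolled[i] * enrolled[i];
                normB += captured[i] * captured[i];
            }
            return dot / (Math.sqrt(normA) * Math.sqrt(normB)); // cosine similarity
        }

        public static void main(String[] args) {
            double[] template = {0.82, 0.40, 0.13, 0.55}; // stored at enrollment
            double[] scan     = {0.80, 0.42, 0.15, 0.53}; // from the optical sensor
            boolean unlocked = similarity(template, scan) > 0.99; // tuned threshold
            System.out.println("Unlocked: " + unlocked);
        }
    }

A real matcher works on far richer features and runs on dedicated hardware, but the accept-if-similar-enough decision has the same shape.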

Because the new technology costs more to make, it’ll hit premium phones first before eventually making its way down the spectrum as the parts become more plentiful and cheaper to make.

Vivo’s phone is the first one we’ve gotten to see with the tech in real life.

Vivo’s been working on putting a fingerprint sensor underneath the screen for the last couple of years, and now it’s finally made one that’s ready for production.

The company had already announced last year that it had developed “in-display fingerprint scanning” technology for a prototype phone. That version used an ultrasonic sensor and was created with support from Qualcomm.

The new version of the finger-scanning tech is optical-based and was developed with Synaptics. In a nutshell, the phone’s OLED display panel emits light to illuminate your fingertip; the lit-up fingerprint is then reflected into an in-display fingerprint sensor and authenticated.

It’s really nerdy stuff; all you really need to know is that phones with fingerprint sensors on the front are back again, and this time without thick bezels above and below the screen.

 

 

 

Should e-sports come to the Olympics?


Those who spend, or rather waste, hours sitting in front of a screen all day while explaining to the world how e-sports will one day make them famous are definitely in for a disappointment. That said, let us also acknowledge that people are earning in the world of e-sports, and if reports are to be believed, e-sports earnings are at an all-time high, with twelve e-sports titles paying as much as $1 million in prize money. This makes us wonder whether e-sports should already be welcomed into the Olympics. It was speculated for quite some time that the growth of e-sports would carry them all the way to the Olympics sooner rather than later. Recently, however, International Olympic Committee President Thomas Bach suggested otherwise and received a huge rebuke from video game fanatics all over the world.

A career already

E-sports are already becoming a prime-time profession, with lead players of the most popular games earning large sums of money. These sports are also gaining massive support as well as sponsorship from top companies like BMW. It is evident that the youth are, and will remain, attracted to them. Why not make it official already by introducing them at the Olympics?

Physical fitness

Apart from the fact that the spirit of the Olympics has always been about physical strength, motor reflexes and fitness, it is worth noting that e-sports make a person extremely lazy, in most cases obese, and sometimes cause injury to the hands and fingers of those who play them for prolonged periods. This is absolutely against the spirit of the Olympics.

More viewers

More viewers, and more sponsorship, would be attracted to the Olympics if e-sports were given a place. This would eventually lead to better prize money for athletes.

Violence

Even if e-sports made it to the Olympics, fans would be disappointed, as the most popular video games would not be played: they are almost always violent, full of explosions and killing.
It would be better for e-sports to have a separate event of their own, an e-Olympics, instead of mixing them into the existing culture of sport at the Olympics.

Net Neutrality


Net Neutrality-
It is the principle that Internet service providers must treat all data on the Internet the same, and not discriminate or charge differently by user, content, website, platform, application, type of attached equipment, or method of communication. For instance, under these principles, Internet service providers are unable to intentionally block, slow down or charge money for specific websites and online content.

History-
The term was coined by law professor Tim Wu in 2003 as an extension of the common carrier concept, which was used to describe the role of telephone systems.
An example of a violation of net neutrality principles was the Internet service provider Comcast’s secret slowing (“throttling”) of uploads from peer-to-peer file sharing (P2P) applications by using forged packets. Comcast did not stop blocking these protocols, like BitTorrent, until the Federal Communications Commission (FCC) ordered it to stop. In another, more minor example, the Madison River Communications company was fined US$15,000 by the FCC in 2004 for restricting its customers’ access to Vonage, which rivaled its own services. AT&T was also caught limiting access to FaceTime, so that only users who paid for AT&T’s new shared data plans could access the application. In July 2017, Verizon Wireless was accused of throttling after users noticed that videos played on Netflix and YouTube were slower than usual, though Verizon commented that it was conducting “network testing” and that net neutrality rules permit “reasonable network management practices”.

Open Internet

Under an “open Internet” schema, the full resources of the Internet, and the means to operate on it, should be easily accessible to all individuals, companies, and organizations.
Applicable concepts include: net neutrality, open standards, transparency, lack of Internet censorship, and low barriers to entry. The concept of the open Internet is sometimes expressed as an expectation of decentralized technological power, and is seen by some observers as closely related to open-source software, a type of software program whose maker allows users access to the code that runs the program, so that users can improve the software or fix bugs.
Proponents of net neutrality see this as an important component of an “open Internet”, wherein policies such as equal treatment of data and open web standards allow those using the Internet to easily communicate and conduct business and activities without interference from a third party.
In contrast, a “closed Internet” refers to the opposite situation, wherein established persons, corporations, or governments favor certain uses, restrict access to necessary web standards, artificially degrade some services, or explicitly filter out content. Some countries block certain websites or types of sites, and monitor and/or censor Internet use with Internet police, a specialized type of law enforcement, or secret police.

Traffic shaping

Traffic shaping is the control of computer network traffic in order to optimize or guarantee performance, improve latency (i.e., decrease Internet response times), and/or increase usable bandwidth by delaying packets that meet certain criteria. In practice, traffic shaping is often accomplished by “throttling” certain types of data, such as streaming video or P2P file sharing. More specifically, traffic shaping is any action on a set of packets (often called a stream or a flow) which imposes additional delay on those packets such that they conform to some predetermined constraint (a contract or traffic profile). Traffic shaping provides a means to control the volume of traffic being sent into a network in a specified period (bandwidth throttling), or the maximum rate at which the traffic is sent (rate limiting), or more complex criteria such as the generic cell rate algorithm.
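A common way to impose such a constraint is the token bucket algorithm: tokens accrue at the contracted rate, a packet may pass only when enough tokens are available, and short bursts up to the bucket capacity are allowed. A minimal sketch (a hypothetical TokenBucket class, not drawn from any particular shaper):

    // Token-bucket traffic shaper: the average rate is capped at refillRatePerSec
    // tokens (bytes) per second, while bursts up to 'capacity' bytes are allowed.
    public class TokenBucket {
        private final double capacity;          // max burst size in bytes
        private final double refillRatePerSec;  // contracted rate in bytes/second
        private double tokens;
        private long lastRefillNanos;

        TokenBucket(double capacity, double refillRatePerSec) {
            this.capacity = capacity;
            this.refillRatePerSec = refillRatePerSec;
            this.tokens = capacity;
            this.lastRefillNanos = System.nanoTime();
        }

        // Returns true if a packet of 'size' bytes conforms to the traffic profile.
        synchronized boolean tryConsume(int size) {
            long now = System.nanoTime();
            tokens = Math.min(capacity,
                    tokens + (now - lastRefillNanos) / 1e9 * refillRatePerSec);
            lastRefillNanos = now;
            if (tokens >= size) {
                tokens -= size;
                return true;   // send immediately
            }
            return false;      // delay or queue the packet instead
        }

        public static void main(String[] args) {
            TokenBucket shaper = new TokenBucket(1500, 1000); // 1 KB/s, 1500-byte burst
            System.out.println("First 1500-byte packet passes: " + shaper.tryConsume(1500));
            System.out.println("Immediate second packet passes: " + shaper.tryConsume(1500));
        }
    }

A non-conforming packet is typically queued until enough tokens accumulate, which is exactly the additional delay the definition above describes.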

Legal enforcement of net neutrality principles takes a variety of forms, from provisions that outlaw anti-competitive blocking and “throttling” of Internet services, all the way to legal enforcement that prevents companies from subsidizing Internet use on particular sites. Contrary to popular rhetoric and statements by various individuals involved in the ongoing academic debate, research suggests that a single policy instrument (such as a no-blocking policy or a quality-of-service policy) cannot achieve the range of valued political and economic objectives central to the debate. As Bauer and Obar suggest, “safeguarding multiple goals requires a combination of instruments that will likely involve government and non-government measures. Furthermore, promoting goals such as the freedom of speech, political participation, investment, and innovation calls for complementary policies.”

PRATYUSH

On Monday, 8 January 2018, India unveiled its fastest supercomputer, ‘Pratyush’, an array of computers that can deliver a peak performance of 6.8 petaflops. One petaflop is a million billion (10^15) floating-point operations per second, a reflection of the computing capacity of a system.
According to the Indian Institute of Tropical Meteorology (IITM), Pratyush is the fourth fastest supercomputer in the world among those designed for weather and climate research. It will also lift India’s ranking from the 300s into the 30s on the Top500 list, a respected international tracker of the world’s fastest supercomputers.
The government had sanctioned 400 crore rupees last year to put a 10-petaflop machine in place. The main function of this supercomputer will be monsoon forecasting using a dynamic model. This requires simulating the weather for a given month and letting a custom-built model calculate how the actual weather will play out over June, July, August, and September. The new system gives the technology wings: it will be possible to map regions of India at a resolution of 3 km and the globe at 12 km.
The machines will be installed at two government institutes: a 4.0-petaflop HPC facility at IITM, Pune, and a 2.8-petaflop facility at the National Centre for Medium Range Weather Forecasting, Noida.
The main purpose of installing such a high-capacity supercomputer in India is to accelerate weather forecasting in the country, primarily ahead of the arrival of the monsoon season. In addition, Pratyush will help monitor the onset of other natural calamities such as floods and tsunamis. Farmers stand to get big relief, as unpredictable rainy seasons in India often result in poor annual crop production.
“This increase in supercomputing power will go a long way in delivering various societal applications committed by the Ministry of Earth Sciences (MoES). It will also give a fillip to research activities, not only in MoES but also in other academic institutions working on various problems related to Earth sciences,” IITM said in its release.

 

INDIA’S OTHER SUPERCOMPUTERS

With Pratyush, India makes its way into the list of the top 30 supercomputers in the world. As of June 2017, the following Indian systems were on the list of the top 500 supercomputing systems:

  • SahasraT (SERC – Cray XC40) installed at Indian Institute of Science (ranked 165)
  • Aaditya (iDataPlex DX360M4) installed at Indian Institute of Tropical Meteorology (ranked 260)
  • TIFR – Cray XC30 installed at Tata Institute of Fundamental Research (ranked 355)
  • HP Apollo 6000 Xl230/250 installed at Indian Institute of Technology Delhi (ranked 391)