Thursday, December 22, 2022

Android Toolkit

ANDROID TOOLKIT:

The evolution of mobile phones has come with complementary networks: 2G, GSM, UMTS, CDMA, GPRS, 3G, 4G, and now 5G, each further enhancing network stability, functionality, and coverage. Only compatible handsets work with these advanced networks. Modern smartphones are sleek and stylish in design, complex to program, and rich in interfacing functionality, sensors, health measures, and OS options; some phones can even dual-boot. We can program these phones, change their OS, and make them dual-boot. They receive regular OTA updates from their vendors, and builds like LineageOS come under the Android Open Source Project (AOSP), giving a choice between custom and stock ROMs.

We can define and customize custom ROMs, while stock ROMs are the company's (e.g., Google's) predefined ROMs. Newer phones use A/B update partitioning: if the A partition is active, the B partition receives the new update and is booted into afterwards. There are a number of partitions, such as boot, system, vendor, and radio, and each serves a different purpose.

boot: The boot partition contains a kernel image and a RAM disk combined via mkbootimg. In order to flash the kernel directly without flashing a new boot partition, a virtual partition can be used:

kernel: The virtual kernel partition overwrites only the kernel (zImage, zImage-dtb, Image.gz-dtb) by writing the new image over the old one. To do this, it determines the start location of the existing kernel image in eMMC and copies the new image to that location, keeping in mind that the new kernel image may be larger than the existing one. The bootloader can either make space by moving any data following it or abandon the operation with an error. If the development kernel supplied is incompatible, you may need to update the dtb partition (if present), or the vendor or system partition with associated kernel modules.

ramdisk: The virtual ramdisk partition overwrites only the RAM disk by writing the new image over the old one. To do this, it determines the start location of the existing ramdisk.img in eMMC and copies the new image to that location, keeping in mind that the new RAM disk may be larger than the existing one. The bootloader can either make space by moving any data following it or abandon the operation with an error.

system: The system partition mainly contains the Android framework.

recovery: The recovery partition stores the recovery image, booted during the OTA process. If the device supports A/B updates, recovery can be a RAM disk contained in the boot image rather than a separate image.

userdata: The userdata partition contains user-installed applications and data, including customization data.

metadata: The metadata partition is used when the device is encrypted and is 16 MB or larger.

vendor: The vendor partition contains any binary that is not distributable to the Android Open-Source Project (AOSP). If there is no proprietary information, this partition may be omitted.

radio: The radio partition contains the radio image. This partition is only necessary for devices that include a radio and have radio-specific software in a dedicated partition.

tos: The tos partition stores the binary image of the Trusty OS and is only used if the device includes Trusty.

Flow:

       Here is how the bootloader operates:

       The bootloader gets loaded first.

       The bootloader initializes memory.

       If A/B updates are used, determine the current slot to boot.

       Determine whether recovery mode should be booted instead, as described in Supporting updates.

       The bootloader loads the image, which contains the kernel and RAM disk (and in Treble even more).

       The bootloader starts loading the kernel into memory as a self-executable compressed binary.

       The kernel decompresses itself and starts executing into memory.

       From there on, older devices load init from the RAM disk and newer devices load it from the /system partition.

       From /system, init launches and starts mounting all the other partitions, such as /vendor, /oem, and /odm, and then starts executing code to start the device.
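The flow above can be sketched as a small decision routine. This is an illustrative sketch in Python, not real bootloader code; the slot names, priority fields, and return strings are assumptions made for the example.

```python
# Illustrative sketch of the bootloader decision flow described above.
# Slot names, priorities, and labels are assumptions, not a real API.

def choose_boot_target(ab_device: bool, slots: dict, recovery_requested: bool) -> str:
    """Return a label for the image the bootloader would load."""
    if recovery_requested:
        # On non-A/B devices recovery is a separate partition; on A/B
        # devices it is a RAM disk contained in the boot image.
        return "boot (recovery ramdisk)" if ab_device else "recovery"
    if ab_device:
        # Pick the bootable slot with the highest priority (e.g. _a or _b).
        active = max(slots, key=lambda s: slots[s]["priority"] if slots[s]["bootable"] else -1)
        return f"boot{active}"
    return "boot"

print(choose_boot_target(
    True,
    {"_a": {"priority": 15, "bootable": True},
     "_b": {"priority": 14, "bootable": True}},
    False))  # boot_a
```

A real bootloader reads slot metadata from flash and also tracks retry counts and a "successful boot" flag, which this sketch omits.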

Images

The bootloader relies upon these images.

Kernel images

Kernel images are created in a standard Linux format, such as zImage, Image, or Image.gz. Kernel images can be flashed independently, combined with RAM disk images, and flashed to the boot partition or booted from memory. When creating kernel images, concatenated device-tree binaries are recommended over using a separate partition for the device tree. When using multiple Device Tree Blobs (DTBs) for different board revisions, concatenate multiple DTBs in descending order of board revision.

RAM disk images

RAM disks should contain a root file system suitable for mounting as a rootfs. RAM disk images are created with mkbootfs, combined with kernel images using mkbootimg, and then flashed into the boot partition.

Boot images

Boot images should contain a kernel and RAM disk combined using an unmodified mkbootimg.

The mkbootimg implementation can be found at: system/core/mkbootimg

The bootloader reads the boot image header (defined in bootimg.h and generated by mkbootimg) and updates the kernel header to contain the correct location and size of the RAM disk in flash, the base address of the kernel, command-line parameters, and more. The bootloader then appends the command line specified in the boot image to the end of the bootloader-generated command line.
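To make the header layout concrete, here is a minimal sketch of parsing the leading fields of a legacy (v0-style) Android boot image header as laid out in bootimg.h. Only the first few fields are decoded; later fields (name, cmdline, id) are skipped for brevity, and the exact field set varies by header version.

```python
import struct

# Minimal sketch: decode the leading fields of a legacy Android boot
# image header. Later fields (name, cmdline, id) are omitted here.
BOOT_MAGIC = b"ANDROID!"
HEADER_FMT = "<8s8I"  # magic, then kernel/ramdisk/second sizes and load
                      # addresses, tags address, page size (little-endian)

def parse_boot_header(blob: bytes) -> dict:
    magic, ksize, kaddr, rsize, raddr, ssize, saddr, tags, page = \
        struct.unpack_from(HEADER_FMT, blob)
    if magic != BOOT_MAGIC:
        raise ValueError("not an Android boot image")
    return {"kernel_size": ksize, "kernel_addr": kaddr,
            "ramdisk_size": rsize, "ramdisk_addr": raddr,
            "page_size": page}
```

With a real device you would read the first page of a boot.img dump and pass it to `parse_boot_header`; the sizes and page size recovered this way are what the bootloader uses to locate the kernel and RAM disk.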

File system images (system, userdata, recovery)

YAFFS2 image format

If using raw NAND storage, these images must be YAFFS2, generated by an unmodified mkyaffs2image, as found in the Android Open-Source Project (AOSP).

TWRP:  

Team Win Recovery Project

The main method of installing ("flashing") this custom recovery on an Android device requires downloading a version made specifically for the device, and then using a tool such as Fastboot or Odin. Also, some custom ROMs come with TWRP as the default recovery image.

TWRP gives users the option to fully back up their device (including bootloader, system data, private applications, etc.) to revert to at any time, and a built-in file manager to delete files that may be causing problems on the device or add some to fix issues.

TWRP supports the installation of custom ROMs (i.e., custom operating systems such as LineageOS, or the latest Android release), kernels, add-ons (Google Apps, Magisk, themes, etc.), and other various mods.

Wiping, backing up, restoring, and mounting various device partitions, such as the system, boot, userdata, cache, and internal storage partitions, are also supported. TWRP also features file transfer via MTP, a basic file manager, and a terminal emulator. It is fully themeable.

Magisk:

Magisk is a suite of open-source software for customizing Android, supporting devices running Android 5.0 and higher. One of its highlight features is that modules can be installed to do just about anything you want systemlessly, which means they won't permanently overwrite your system files. Once you uninstall them and reboot, you're right back to stock.

GSI:

A Generic System Image (GSI) is a pure Android system image built from the AOSP framework, designed to run across many devices. There are also a number of custom ROMs, such as LineageOS, RevengeOS, Bliss ROM, and Evolution X.

ADB & Fastboot:

ADB and Fastboot are utilities that unlock access to the Android system while your phone is connected to a desktop computer via a USB cable. The computer and cable are integral to this: there's no app version, and while you can use ADB wirelessly, it's much more complicated to set up. Fastboot is a protocol and a tool of the same name, included with the Android SDK package and used primarily to modify the flash filesystem via a USB connection from a host computer. It requires the device to be started in Fastboot mode.
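These tools can be scripted from the host machine. Below is a hedged sketch that only builds the command lines (running them requires the Android platform-tools on PATH and a connected device); the partition name and image file are illustrative examples, not instructions for any particular device.

```python
import subprocess

# Sketch of driving adb/fastboot from a host via subprocess.
# "boot"/"boot.img" below are illustrative, not device-specific advice.

def adb(*args):
    return ["adb", *args]

def fastboot(*args):
    return ["fastboot", *args]

def run(cmd):
    # Requires platform-tools on PATH and a connected device.
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Typical sequence: reboot into the bootloader, flash, reboot.
steps = [
    adb("reboot", "bootloader"),
    fastboot("flash", "boot", "boot.img"),
    fastboot("reboot"),
]
print(steps[1])  # ['fastboot', 'flash', 'boot', 'boot.img']
```

Keeping the command construction separate from `run()` makes the sequence easy to inspect or log before anything touches the device.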

 

GitHub:

GitHub is a code hosting platform for version control and collaboration. It lets you and others work together on projects from anywhere, through a resourceful tree of repositories, branches, commits, and pull requests. GitHub is a Git repository hosting service, but it adds many of its own features: while Git is a command-line tool, GitHub provides a web-based graphical interface. It also provides access control and several collaboration features, such as wikis and basic task management tools for every project.

XDA:

XDA is a software development community for Android. Its forums, such as XDA Android and XDA Treble, host working Android environments for a wide range of mobile phones.


Coding

  


CODING:

Coding is the writing of instruction sets that communicate with computer hardware to generate logical input or output; broadly, it is defined as programming. There are basically two kinds of programming: 1. Software 2. Device or hardware

There are several key programming languages, such as:

       Python

Python has become a programming essential since the arrival of TensorFlow, thanks to its wide range of libraries to select from. It is widely accepted by data scientists and machine learning programmers.
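Part of that appeal is how little code basic data work takes; even the standard library covers the essentials, as this tiny sketch with illustrative sample values shows:

```python
from statistics import mean, stdev

# A taste of why Python suits data work: concise and batteries-included
# (the statistics module ships with the standard library).
samples = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(mean(samples))               # 5.0
print(round(stdev(samples), 2))    # 2.14 (sample standard deviation)
```

For heavier workloads the same style scales up through libraries like NumPy, pandas, and TensorFlow.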

       Perl

Powerful, stable, portable, and mature, Perl is one of the most feature-rich programming languages, with over three decades of development. Perl is portable and cross-platform; currently it can run on over 100 platforms. Perl is good for both mission-critical large-scale projects and rapid prototyping. It is used for a variety of purposes, including web development, GUI development, system administration, and many more. For web development, Perl CGI is used: CGI is the gateway through which the web browser interacts with Perl on a system. Perl supports most operating systems and is even listed in the Oxford English Dictionary. Its concepts and syntax are taken from many languages, such as awk, the Bourne shell, C, sed, and even English. Perl is an interpreted language: when a Perl program runs, it is first compiled into byte code, which is then converted into machine instructions. So writing something in Perl instead of C can save you time.

       Java

Java is an easy-to-use programming language that furnishes basic debugging processes, graphical representation of data, huge package services, better user interaction, and work simplification in large projects. Java is viewed as a safe language because of its use of bytecode and sandboxes.

       C++

C++ is one of the oldest programming languages. It provides coding with data integrity and security, and wide scope for programming with classes, using access specifiers such as public, private, and protected, and data security features such as encapsulation. It facilitates object-oriented programming (OOP). C++ can help make quick and well-coded algorithms.

       CSS

CSS, or Cascading Style Sheets, is a style sheet language that simplifies the process of transforming the look of web pages. It manages the look and feel of a web page to retain customer engagement, especially in the retail industry. Developers can control text colors, sizes, fonts, spacing, backgrounds, and much more, according to different devices.

       R

R is meant for high-level statistics and data visualization. For anyone who needs to comprehend the mathematical computations associated with machine learning or statistics, this is the best programming language to learn.

FPGA & ASIC: 

Alongside these we have hardware description languages (HDLs): VHDL, Verilog, and SystemVerilog. They include a means of describing propagation time and signal strengths. VHDL and Verilog implement register-transfer-level (RTL) abstractions, providing a higher level of abstraction with RTL simulators. SystemVerilog was developed to provide an evolutionary path from VHDL and Verilog to support the complexities of SoC designs. It is a bit of a hybrid: the language combines HDLs with a hardware verification language, using extensions to Verilog, and takes an object-oriented programming approach. SystemVerilog includes capabilities for testbench development and assertion-based formal verification. VHDL is based on the Ada programming language and was created for the Very High Speed Integrated Circuit (VHSIC) program, starting as a modeling language for digital and analog circuits. FPGAs and ASICs carry application-specific hardware, and so there may be a number of different hardware programming languages.
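The core of the RTL abstraction is that state lives in registers that update only on clock edges. As a language-neutral illustration (written in Python rather than an HDL, so it is a behavioral sketch, not synthesizable hardware), a clocked D flip-flop can be modeled like this:

```python
# Python model of RTL semantics: a register captures its input only on
# the clock edge. This is an illustration of the idea, not an HDL.

class DFlipFlop:
    def __init__(self):
        self.q = 0   # stored state (the register output)
        self.d = 0   # combinational input line

    def clock_edge(self):
        # On each rising edge, the register captures its input.
        self.q = self.d

dff = DFlipFlop()
dff.d = 1
print(dff.q)      # 0: the input changed, but no edge has occurred yet
dff.clock_edge()
print(dff.q)      # 1: captured on the edge
```

An RTL design in Verilog or VHDL is essentially many such registers plus the combinational logic that computes each `d` between edges.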

In VLSI systems we have different types of hardware: microprocessors, microcontrollers, CPUs, RAM, ROM, SoCs, etc. Each product is characterized by its system architecture, capacity, functionality, and properties. The chips come with their vendors' programming tools, from companies like Intel, Xilinx, Cadence, and Siemens.

 

Firmware:

Firmware, or device programming, has a very large scope in embedded systems, a wide field with a broad range of products and applications under development. Generally, C and C++ are used for firmware programming. If the circuit is the hardware, and the web or mobile app is the software, then the code running on the microcontroller or processor inside the electronic circuit is the firmware.



Hyperloop TT



HyperloopTT is a high-speed public transportation system connecting far-apart stations. The system aims to reduce transportation time by half, with an economical fare.

The HyperloopTT system has a low implementation cost compared to other high-speed transportation methods. As a civil infrastructure project covering long distances, there will be segments that are above ground, at grade, and below ground, optimizing to meet unique local conditions.

HLTT is bringing signaling logic into the cloud, which can create a more reliable, sustainable, and long-lasting system for capsules that will travel as fast as planes.

By replacing the capabilities of complex physical equipment with cloud-based software, the solution offers greater reliability, greater flexibility in deployment, cuts maintenance costs and is more sustainable. The simulator can also help to make HyperloopTT more efficient by automating repetitive tasks and detecting and managing potential disruptions, instead of reacting to events as they occur.

The proof of concept uses signaling technology to simulate the regulation and control of capsules moving at very high speeds. ERTMS has the benefit of being used and recognized internationally, making it highly interoperable and thereby allowing HyperloopTT systems to operate safely across the world without the need to create new standards.

Having completed the simulation model, the next step in the process would be to digitally integrate both the signaling infrastructure and the cloud-based model for the physical capsules. This would open the door to physical testing of the whole system at the HyperloopTT test track in Toulouse.

Hitachi Rail and HyperloopTT have achieved an important milestone towards the commercial running of the innovative system – that will be able to run at speeds of up to 1,200km/h – with the completion of proof of concept for a cloud-based ERTMS signaling system for HyperloopTT capsules.

Hitachi Rail's digital signaling technology is used in the USA & Canada, Europe, the Middle East, Australia and Asia to help safely move millions of passengers every day.

The Hitachi Rail and HLTT partnership allows them to evolve best-in-class signaling and automation systems and to customize them for HyperloopTT's super-high-speed transport: "We are excited about this achievement and are looking ahead to the next stage of the program."

The objective is to integrate hyperloop’s capsule traveling system with Hitachi Rail’s industry-leading signaling technology, ERTMS (European Rail Traffic Management System). Hitachi Rail is a global industry leader in digital signaling for high-speed rail and was the first provider to introduce ERTMS technology in Europe – in the UK, Italy, Spain, Sweden, and France – and in the highly competitive markets of China and India.

The core technology behind this high-speed transportation is magnetic levitation. The system incorporating this technology includes:

 

1. Capsule

Similar in size to a small commercial aircraft without wings, hyperloop’s pressurized capsules float on a frictionless magnetic cushion within the tubes.

Our capsules are engineered and designed for ultra-high speeds using cutting-edge composite materials and safety features. HyperloopTT developed Vibranium™, a smart material with sensors embedded between layers of carbon-fiber fuselage skin to monitor and transmit critical information regarding temperature, stability, and integrity, all wirelessly and instantly.

 

2. Infrastructure


The HyperloopTT system reduces the environmental cost of a large-scale infrastructure project by integrating solar panels and other renewable energy sources to create a net energy positive system that aims to generate more energy than it utilizes. The harnessing of renewable energy also lowers operational costs. The system operates in a low-pressure, fully enclosed environment, eliminating traditional hazards from weather and traffic crossings, significantly improving efficiency and reliability.

 

3. Station

The future is now boarding

Our station is designed around the passenger. Every moment along the HyperloopTT journey is engineered to deliver a frictionless experience with digital ticketing, biometric check-in, wayfinding, and an on-demand boarding system.

HyperloopTT stations are specifically designed for local environments. A transit-oriented development, the station integrates existing first and last-mile solutions, while creating a dynamic space where passengers can access goods, on-demand services and experiences. Stations are designed as community hubs that reflect the local culture and provide significant value to surrounding neighborhoods and passengers.

 

4. Vacuum

A whole new atmosphere

The low-pressure environment inside the tube is achieved through a specially designed HyperloopTT vacuum unit. Co-developed with Leybold, the inventor of the vacuum pump, the unit fits within a standard shipping container to offer a plug-and-play solution. The system is optimized to achieve and maintain low pressure in the tubes while minimizing energy consumption and maximizing operational uptime. The containers will be located along the route every 6.2 miles.

With the air inside the tube drastically reduced, the capsule can achieve high speeds with less energy consumption.

 

5. Levitation (Passive Magnetic Levitation)

Elevating transport

Our proprietary passive magnetic levitation technology, called Inductrack™, is a game-changer for high-speed transportation. The magnets are arranged in a Halbach array configuration, enabling capsule levitation over an unpowered but conductive track. As capsules move through the low-pressure environment, they use very little energy en route thanks to the reduced drag forces.

Should there ever be a power failure, the capsule will automatically slow down and settle on its auxiliary wheels at low speed. The Inductrack™ system was tested and validated on a full-scale passive levitation track. HyperloopTT then improved the technology and optimized it for a low-pressure environment through testing in our prototype.

 


Cyber Security


CYBER SECURITY

1.       Introduction

Cyber security is about the world we have built over time: how secure, vulnerable, strong, predictive, adaptive, sensible, and futuristic it is. Behind us is a history full of rich culture, varied communities, science, and astronomy. Some mathematical algorithms are yet to be solved, and some must be modified to suit new requirements. We are carrying all this data forward to cast our new future world. The word "cyber" names the world where today's society lives and exists: the world of data, networks, and connectivity.

 

2.       Information Protection

There are a huge number of computers, and many times that number of users, connected to the Internet through PCs, mobiles, embedded computers, iPods, etc. Here we need to protect Internet services, product services, user databases, and other general user-facing Internet services.

Many kinds of industries, organizations, institutions, government buildings, military and scientific research facilities, power stations, air, railway, and road networks, banks, stock exchanges, public places, and shopping malls are all connected through the Internet. These require uninterrupted power supply, Internet connectivity, and network security. Since a large amount of data moves inbound and outbound, data security is also required.

Through information technology and computer science we can study and prepare a roadmap to protect our Internet-based infrastructure. LAN, WAN, and MAN are different kinds of Internet and intranet networks used in the industries above, and without protection they can be compromised easily. Organizations should have a separate budget for cyber security.

 

3.       Cyber Landscape

When we talk about cyberspace, it looks as if it was built around social interaction with few security measures. In cyberspace, more freedom means less security of identity, data, and privacy. We should treat cyberspace as the nexus that allows for potential and very real connections among organized crime, terrorists, hackers, foreign intelligence agencies, the military, and civilians.

The mistake of assuming security is someone else’s problem often comes with tragic consequences. It is not the responsibility of engineers, consultants, IT professionals or even management to undertake alone, but is the responsibility of every user. Granted, there are many specific roles required in security planning, but if the plan does not include each and every user as a member of the security team, it will be doomed before it has even been implemented.

Understanding the possible motivations and means behind a cyber-attack can better equip enterprises to prepare for and respond to an attack, and should be the starting point of cyber-security planning. By implementing Governance, Risk Management and Compliance (GRC) measures across the enterprise, we can better withstand cyber-attacks.

 

4.       Security Arena

Large and superpower countries and their organizations face cyber-attacks and receive cyber-attack threats on a regular basis. They have to spend significant time and money countering these threats, even for routine work, which makes that work all the more important.

The cyber-security arena has expanded dramatically. Cyber-security now includes mobile phones, embedded computers (widely employed in our infrastructure), cloud computing, and all types of data storage. And cyber-crime has become a business, operating without borders, that is increasingly difficult to arrest.

5.       Cyber-attacks: Inspiration with Benefits

Here every system is a target. Information is one of our most valuable assets, and wherever it is stored, transmitted, or processed it becomes a target for cyber-attackers. US federal authorities have raised the issue of financial crimes committed in the form of financial data theft in cyberspace by foreign actors. At the same time, the US government has also started outsourcing IT business to China and other subcontinent countries in an effort to address the problem.

We have vulnerable security at nuclear plants, electric smart grids, gas pipelines, traffic management systems, prison systems, water distribution facilities, and TV broadcasting centers, all of which require protection. The motivations behind cyber-attacks include intellectual property theft, service disruption, financial gain, equipment damage, critical infrastructure control and sabotage, political reasons, and personal motives.

Cyber-attackers fall mainly into two groups: the lone wolf (solo hacker) and well-organized groups. Sometimes both can pose a potentially equal threat.

The Hollywood movie "Die Hard 4" shows how the hero saves his country from a "fire sale" attack with the help of a hacker.

6.       Type of Cyber attacks

There are a number of types of cyber-attacks:

·         Malware:

Under this attack the system becomes sluggish and slow, services are disrupted, and use of applications is restricted. Through Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks it can even crash the system.

·         Stealing of Internet service:

Some vendors provide seemingly legal programs that are used to steal Internet service.

·         Web site and Web Applications:

The attacker can carry out a pivot attack by bypassing perimeter security. The attacker first gathers information through the website and then penetrates the core system. Several types of vulnerabilities allow for different forms of attack; the most common of these are cross-site scripting (XSS) and SQL injection.
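SQL injection in particular is easy to demonstrate. The sketch below, using Python's built-in sqlite3 with an illustrative in-memory table, shows the same attacker-supplied string defeating a string-concatenated query while a parameterized query treats it as plain data:

```python
import sqlite3

# Sketch: why parameterized queries stop SQL injection.
# The table and rows are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

malicious = "x' OR '1'='1"

# Unsafe: attacker input concatenated into the SQL text matches every row.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'").fetchall()

# Safe: the driver binds the value; it is compared as a literal string.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

print(unsafe)  # [('alice',)] -- injection succeeded
print(safe)    # [] -- input treated as data, no match
```

The defense generalizes: never build SQL by string interpolation from user input; always use the driver's placeholder mechanism.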

·         Advanced Persistent Threat:

This kind of attack requires a skilled programmer who can write malware for persistent attacks that continue until the objective is achieved. The target site is studied, and every parameter is considered to make the attack successful. After the program is installed on the site, it stays dormant until the right time to become active.

There are other types of cyber-attacks too, such as phishing and social engineering, stolen devices, botnets, viruses, worms, and Trojans.

7.       Cost of successful cyber attack

It is often impossible to calculate the precise damage of a cyber-intrusion. The consequences of an attack can be far-reaching and long-term. The damage may often be irreparable; no amount of money can undo what has been done. Some of the effects of a cyber-intrusion include:

  •       Financial loss from service unavailability
  •       Loss of customer/client confidence
  •       Market shift to competitors
  •       Lawsuits and liabilities from those who have had information stolen
  •       Cost of recovery
  •       Cost of security measures to prevent a repeat attack
  •       Cost of staff or consultants to investigate and identify the method of attack
  •       Fines from regulatory bodies
  •       Cost of informing customers of theft
  •       Theft of intellectual property
  •       Loss of human life

In the IT industry there are a number of companies working in cyber security, producing security products such as CCTV monitoring, biometric scanners, firewalls, antivirus programs, and data protection and data recovery software.

 

8.       Security Implementation

The Cyber Security space can be broken down into three areas, or domains. These are:

1.       Prepare

Preparation includes planning, risk assessment, policy, business continuity planning, countermeasure deployment, training, education and accreditation. These are all essential in optimizing our readiness for cyber-attacks.

2.       Defend

In the context of defending against cyber-attacks, defensive processes include ongoing risk mitigation, service and device hardening, and incident detection.

 

3.       Act

Finally, we should establish procedures and protocols to ensure that in the event of an incident we act appropriately. We avoid the term "react", as it tends to carry the negative connotation of a knee-jerk reaction that is ill-conceived and inflammatory. Actions in response to a cyber-attack should be carefully planned to facilitate an effective response that minimizes expense and collateral damage. The word "act" is hence deliberate and suggests that organizations should be proactive rather than reactive.

The continual application of these three domains cannot be emphasized enough. External consultants who are experienced, certified security professionals can be invaluable resources in maintaining an effective cyber-security posture and ensuring our businesses remain unhindered by an attack they were unprepared to handle.

These domains should not be seen as sequential steps in which each is terminated prior to the commencement of the next, but rather three continual processes that form the foundation of organizational security.

 

 

Saturday, December 17, 2022

Cloud Computing


 CLOUD COMPUTING 


The need to achieve excellent Quality of Service (QoS) to facilitate effective Quality of Experience (QoE) is one of the notable factors that has brought about substantial evolution in computing paradigms. For instance, the cloud computing paradigm has been presented to ensure effective development and delivery of various innovative Internet services. However, the unprecedented development of various applications and the growing number of smart mobile devices supporting the Internet of Things (IoT) have placed significant constraints regarding latency, bandwidth, and connectivity on the centralized paradigm of cloud computing. To address these limitations, research interest has been shifting toward decentralized paradigms.

A good instance of a decentralized paradigm is edge computing. Conceptually, edge computing focuses on rendering services at the network edge to alleviate the associated limitations of cloud computing. A number of edge computing implementations have been presented, such as cloudlet computing (CC), mobile cloud computing (MCC), and mobile edge computing (MEC). Another edge computing evolution is fog computing, which offers an efficient architecture focused on both horizontal and vertical resource distribution in the Cloud-to-Things continuum. Cloud and fog are complementary computing schemes: they establish a service continuum between the endpoints and the cloud, offering services that are jointly advantageous and symbiotic to ensure effective and ubiquitous control, communication, computing, and storage along the established continuum. In this light, fog goes beyond mere cloud extension and serves as a merging platform for both cloud and IoT, facilitating effective interaction in the system.

Nevertheless, these paradigms demand further research efforts due to the required resource management that is demanding and the massive traffic to be supported by the network.

In addition, there have been significant research efforts toward sixth-generation (6G) networks. It is envisaged that technologies such as device-to-device communications, Big Data, cloud computing, edge caching, edge computing, and IoT will be well supported by 6G mobile networks. 6G is envisioned to be based on major innovative technologies such as super IoT, mobile ultra-broadband, and artificial intelligence (AI). Terahertz (THz) communications are envisaged as a viable solution for supporting mobile ultra-broadband, super IoT can be achieved with symbiotic radio and satellite-assisted communications, and machine learning (ML) methods are expected to be promising solutions for AI networks. Based on these innovative technologies, beyond-5G networks are envisaged to offer considerable improvement over 5G by employing AI to automate and optimize system operation.

Cloud computing presents an enabling platform that offers ubiquitous and on-demand network access to a shared pool of computing resources such as storage, servers, networks, applications, and services. These interconnected resource pools can be conveniently configured and provisioned with minimal interaction. Besides cost-effectiveness, in terms of support for pay-per-use policies and expenditure savings, key inducements for adopting the cloud computing paradigm are easy and ubiquitous access to applications and data.

Latency:

One of the main challenges of the IoT is the associated stringent latency requirements. End-to-end latencies below a few tens of milliseconds are required by some time-sensitive (high-reliability, low-latency) IoT applications, such as drone flight control, vehicle-to-roadside communications, gaming, virtual reality, vehicle-to-vehicle communications, and other real-time applications.

 Bandwidth:

The unprecedented increase in the number of connected IoT devices results in the generation of huge data traffic. The created traffic can range from tens of megabytes to a gigabyte of data per second. For instance, Google handles about one petabyte of traffic per month, while AT&T’s network carried about 200 petabytes in 2010. Besides, it is estimated that the U.S. smart grid will generate about 1000 petabytes per year. Consequently, effective support of this traffic demands relatively huge network bandwidth. Moreover, data privacy concerns and regulations may prohibit excessive data transmission.
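A quick back-of-envelope calculation shows what the smart-grid figure above implies as a sustained network load (using decimal petabytes, i.e. 10^15 bytes):

```python
# Back-of-envelope: average sustained bandwidth needed to carry
# ~1000 petabytes per year (the U.S. smart-grid estimate above).

SECONDS_PER_YEAR = 365 * 24 * 3600
petabytes_per_year = 1000
bytes_per_year = petabytes_per_year * 10**15   # decimal petabytes

avg_bytes_per_s = bytes_per_year / SECONDS_PER_YEAR
avg_gbit_per_s = avg_bytes_per_s * 8 / 10**9

print(f"{avg_gbit_per_s:.0f} Gbit/s sustained on average")  # roughly 254 Gbit/s
```

Roughly 250 Gbit/s of continuous aggregate traffic, before accounting for peaks, which is why edge aggregation and local filtering of IoT data are attractive.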

Resource constrained devices:

The IoT system comprises billions of objects and devices with limited resources, mainly in terms of storage (memory), power, and computing capacity. Because of these limitations, constrained devices cannot simultaneously execute all of their desired functionality.

Besides, it is impractical for such devices to depend exclusively on their limited resources to meet all of their computing demands. It is also cost-prohibitive and unrealistic for them to interact directly with the cloud, owing to the complex protocols and resource-intensive processing involved.
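The trade-off described above, executing locally versus offloading to a more capable node, is often captured by a simple textbook model: offload when the time to upload the input plus the remote execution time beats local execution. All the parameters in this sketch are illustrative assumptions.

```python
# A simplified computation-offloading decision for a constrained device
# (a common textbook model; parameter values are illustrative assumptions).

def should_offload(cycles: float, local_hz: float, remote_hz: float,
                   input_bits: float, uplink_bps: float) -> bool:
    """Offload when transfer + remote execution beats local execution."""
    local_time = cycles / local_hz
    offload_time = input_bits / uplink_bps + cycles / remote_hz
    return offload_time < local_time

# 1 Gcycle task, 100 MHz device CPU, 10 GHz remote CPU,
# 1 Mbit of input data, 5 Mbit/s uplink:
print(should_offload(1e9, 1e8, 1e10, 1e6, 5e6))  # True: 0.3 s vs 10 s locally
```

The same model also shows when offloading fails: a large input over a slow uplink can make the transfer cost dominate, so the device is better off computing locally despite its weak CPU.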

Security and privacy:

The present Internet cybersecurity schemes are mainly designed for securing consumer electronics, data centers, and enterprise networks. These solutions target perimeter-based protection using firewalls, Intrusion Detection Systems (IDSs), and Intrusion Prevention Systems (IPSs). Given the associated advantages, certain resource-intensive security functions have also been shifted to the cloud, again focusing on perimeter-based protection by performing authentication and authorization through the cloud. However, this security paradigm is insufficient for IoT-based security challenges.

Cloud computing is a technology paradigm that offers useful services to consumers and has the long-term potential to change the way information technology is provided and used. The cloud ecosystem consists of four major entities, each of which plays a vital role in fulfilling the requirements of all stakeholders. The role played by each depends on its position in the market and its business strategy. The most prominent entities in the cloud ecosystem are:

Cloud Service Provider: makes cloud services available to cater to the needs of users from different domains by acquiring and managing the computing resources, both hardware and software, and arranging networked access for cloud customers.

Cloud Integrator: a facilitator who identifies, customizes, and integrates cloud services according to customers’ requirements. It plays the important matchmaking role of negotiating the relationship between the consumer and the producer of the services.

Cloud Carrier: an intermediary that facilitates connectivity and brings cloud services to the end user’s doorstep by providing access through different networks and devices.

Cloud Customer: the actual user of the services extended by the service provider; it may be an individual or an organization, which in turn may have its own end users such as employees or other customers.

  Types of service models:

Cloud service providers harness huge computing resources spanning large geographical areas to provide seamless, efficient, and reliable services to customers at marginal prices. The computing resources deployed over the Internet comprise the hardware, application software, and operating systems used for virtualization, storage, and compute. There are basically three service models for offering high-volume, low-cost services to the end user:

 Software as a Service (SaaS)

In this model, applications are hosted by a cloud service provider and offered to customers over the Internet; end users access the software through a thin client, typically a web browser. All software and relevant data are hosted centrally on cloud servers. CRM, office suites, email, games, contact data management, financial accounting, text processing, and similar applications typically fall under this category.

Platform as a Service (PaaS)

A PaaS is typically a programming platform for developers. It provides the ecosystem for programmers and developers to create, test, run, and manage applications, giving access to a runtime environment along with development and deployment tools. Developers have no access to the underlying OS and hardware layers but can deploy and run their own applications. Microsoft Azure, Salesforce, and Google App Engine are typical examples of PaaS.

Infrastructure as a Service (IaaS)

IaaS makes IT resources such as servers, processing power, data storage, and networks available as on-demand services. Users can dynamically choose CPU, memory, and storage configurations according to their needs, buying these virtualized and standardized services as and when required. For example, a cloud customer can rent server time, working memory, and data storage, and run an operating system on top with applications of their own choice.
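The pay-per-use idea behind IaaS can be sketched in a few lines: the customer pays only for the hours a rented virtual machine actually runs, not a flat fee. The instance names and hourly prices below are hypothetical, chosen for illustration only.

```python
# A hedged sketch of IaaS pay-per-use billing.
# Instance types and prices are hypothetical, for illustration only.

HOURLY_RATE = {          # assumed per-hour prices (USD)
    "small":  0.02,      # e.g. 1 vCPU, 1 GB RAM
    "medium": 0.08,      # e.g. 2 vCPU, 4 GB RAM
    "large":  0.32,      # e.g. 8 vCPU, 16 GB RAM
}

def monthly_cost(instance: str, hours_used: float) -> float:
    """Pay-per-use: cost is rate * hours actually run, not a flat fee."""
    return HOURLY_RATE[instance] * hours_used

# A medium VM run only during business hours (8 h x 22 days) vs. all month:
print(round(monthly_cost("medium", 8 * 22), 2))    # 14.08
print(round(monthly_cost("medium", 24 * 30), 2))   # 57.6
```

The gap between the two totals is exactly the economic argument for IaaS: capacity that would sit idle under ownership simply is not billed.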

 Types of deployments

Furthermore, these services can be deployed as Public, Private, or Hybrid Clouds; each model has its own advantages and disadvantages.

 Public cloud

In the Public Cloud delivery model, all physical infrastructure is owned by the service provider and hosted at the vendor’s premises, with services delivered off-site over the Internet. The customer has no control and only limited visibility over where the service is hosted, as the massive hardware installations are distributed seamlessly across the country or the globe. This massive scale enables economies of scale that permit maximum scalability to meet the varying requirements of different customers, and thus provides the greatest efficiency and maximum reliability through shared resources, though at the cost of added vulnerability.

 Private cloud

In the Private Cloud model, the entire infrastructure is owned, managed, and operated exclusively by the organization, by a third-party vendor, or by both together, and is hosted on the organization’s premises using a virtualization layer. It facilitates flexibility, scalability, provisioning, automation, and monitoring, and thus offers the greatest level of control, configurability, high availability or fault tolerance, and advanced security, which the public cloud lacks. The very concept of private clouds is driven by concerns about security and keeping assets within the firewall, which makes them significantly more expensive, with typically modest economies of scale.

 Hybrid cloud

As the name suggests, a Hybrid Cloud mixes Public and Private Cloud options sourced from multiple providers, at the added cost of tracking multiple security platforms while ensuring all parts of the business communicate with each other seamlessly. In the hybrid approach, operational flexibility, scalability, efficiency, and security are properly balanced by hosting mission-critical applications and sensitive data on the Private Cloud, while generic application development, big-data operations on non-sensitive data, and testing run on the Public Cloud. The Hybrid Cloud thus leverages the benefits of both, maintaining a balance among efficiency, cost savings, security, privacy, and control.
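The hybrid placement rule described above can be sketched as a tiny routing function: sensitive or mission-critical workloads stay on the private cloud, everything else can burst to the public cloud. The workload names and the two-flag classification are illustrative assumptions, not a real policy engine.

```python
# A minimal sketch of hybrid-cloud workload placement.
# The classification rule and workload names are illustrative assumptions.

def place_workload(sensitive: bool, mission_critical: bool) -> str:
    """Return the target deployment for a workload."""
    if sensitive or mission_critical:
        return "private"   # keep behind the firewall
    return "public"        # cheaper, elastic capacity

workloads = [
    ("customer-billing", True,  True),    # sensitive data, mission-critical
    ("dev-test",         False, False),   # generic development and testing
    ("big-data-anon",    False, False),   # big data on non-sensitive data
]
for name, sensitive, critical in workloads:
    print(name, "->", place_workload(sensitive, critical))
```

Real hybrid deployments refine this with many more criteria (compliance regimes, data residency, latency), but the basic split between protected and elastic tiers is the same.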

 Aspects of cloud security


 A. Cloud Security Simplified:

 ·         Access Control

 ·         System Protection

 ·         Personal Security

 ·         Information Integrity

 ·         Cloud Security Management

 ·         Network Protection

 ·         Identity Management

 

B. Vulnerabilities and threats:

 ·         Data Breaches/Data Loss

 ·         Denial of Service Attacks/Malware Injection

 ·         Hijacking Account

 ·         Inadequate Change Control and Misconfiguration

 ·         Insecure Interfaces and Poor APIs implementation

 ·         Insider Threats

 ·         Insufficient Credentials and Identity/Compromised accounts

 ·         Weak control plane/Insufficient Due Diligence

 ·         Shared Vulnerabilities

 ·         Nefarious use or Abuse of Cloud Services

 ·         Lack of cloud security strategy/Regulatory violations

 ·         Limited cloud usage visibility

 

 

 

 
