500-Word Summary of Article About Big Data in Cloud Computing

TO:           Prof.  Ellis

FROM:     Norbert Derylo

DATE:     Oct 6, 2021

SUBJECT:     500-Word Summary of Article About Big Data in Cloud Computing

A major challenge of working with big data is storing and processing it at scale, and cloud computing provides many benefits that help address this challenge. The study focuses on the various interactions between cloud computing and big data.

Big data is data that is difficult to store due to its volume and variety. In the article, big data is defined as “a new generation of technologies and architectures, designed to economically extract value from very large volumes of a wide variety of data, by enabling the high velocity capture, discovery, and/or analysis.” Big data can be described with four characteristics: volume, variety, velocity, and value. Big data can also be classified by five aspects: data sources, content format, data stores, data staging, and data processing. Each classification of data has its own characteristics and complexities.

Cloud computing is the next generation of computing in professional settings and has many advantages over current computing methods. The increased popularity of wireless devices has made cloud computing extremely useful. Cloud computing and big data work hand in hand, with cloud computing serving as the base on which big data thrives and expands. Because of the variety of big data, no single form of cloud computing will work for every form of big data.

The two sets of data on the relationship between big data and cloud computing come from scholarly sources and from different vendors of cloud computing. Case studies from different vendors demonstrate the extensive variety of research that uses cloud computing. SwiftKey used cloud computing to scale its services with demand. 343 Industries used cloud computing to make their game more enjoyable. redBus used cloud computing to improve customer service for online bus ticketing. Nokia used cloud computing to process petabytes of data from their phone network. Alacer used cloud computing to improve response times to system outages. Scholarly studies also used cloud computing for big data projects. Scholars studying DNA have used cloud computing to dramatically increase the speed at which they analyze DNA sequences. One case study showed that cloud computing can acquire and analyze extremely large data sources, such as data from social media. A study of microscopic images used cloud computing to submit data-processing jobs to the cloud. Another study showed that cloud computing can serve as a backup when other computing services suffer massive failures.

Data integrity is a data security concern, and it can be a concern in cloud computing as well. Transforming big data for analysis is a challenge and one of the reasons big data is not as popular as it should be. Data quality can be quite variable and cause concerns with big data, because big data draws on many different sources that might not follow the same structure. Privacy is also one of the biggest concerns with cloud storage. Encryption is the most popular way to keep cloud data safe, but encrypted data does not scale well and the computation time it requires is not practical for big data. Another possible solution is using algorithms to determine how to release data in order to prevent leaks. Although studies have addressed multiple issues with cloud computing, there are few tools that can patch up the problems. Data staging is an issue involving the many different formats in which big data is collected. There have been solutions to improve distributed storage systems, but they don't fix all of the problems with optimization and accessibility. Current data-analysis algorithms are not optimized enough to scale with big data. Data security is, and will remain, an ongoing problem in cloud computing and big data.
References

[1] I. A. T. Hashem, I. Yaqoob, N. B. Anuar, S. Mokhtar, A. Gani, and S. Ullah Khan, “The rise of ‘big data’ on cloud computing: Review and open research issues,” Information Systems, vol. 47, pp. 98–115, 2015. https://doi.org/10.1016/j.is.2014.07.006

500-Word Summary of “A computer program for simulating time travel and a possible ‘solution’ for the grandfather paradox”

TO:                  Professor Ellis
FROM:            Sebastian Vela
DATE:             October 8, 2021
SUBJECT:       500-Word Summary of “A computer program for simulating time travel and a possible ‘solution’ for the grandfather paradox”

The following is a 500-word summary of a master’s thesis about a program that simulates the grandfather paradox. The paradox explains that if a time traveler had gone back in time to kill his grandfather, then he would not have been born to travel back in time, thus allowing the grandfather to live and for the time traveler to be born. The program written by Doron Friedman uses automated reasoning to “modify” history and explore the consequences and propose possible solutions.

The actions and constraints in the simulation are labeled with the values “True”, “False”, or “Unknown”. For every person in the simulation, these values indicate whether the person is alive or not. The timeline of events can be changed by adding or removing actions in the simulation. A strong change alters something that already happened, rather than just adding actions. With each new event, the previous timeline is not erased; instead, the program keeps the consequences of the first timeline and then computes the consequences of the second.

A few hundred facts and constraints are included, with the paradox “solution” containing 229 facts and 344 constraints. Only three actions exist in each simulation: begetting, killing, and traveling in time (which is split into “depart” and “arrive”). Preconditions and post-conditions govern the actions: for A to kill B, both must initially be alive, and B not being alive is the post-condition. The constraints “remains” and “appears” are introduced for continuity of existence.
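To make the fact-and-constraint scheme concrete, here is a minimal Python sketch of how such a precondition/post-condition check might look. The class and function names are invented for illustration; Friedman's actual reasoning engine is far more sophisticated and is not reproduced here.

```python
# Illustrative sketch only; names and structure are assumptions, not Friedman's code.
TRUE, FALSE, UNKNOWN = "True", "False", "Unknown"

class Timeline:
    def __init__(self):
        self.facts = {}  # e.g. {("alive", "F", 0): TRUE}

    def value(self, fact):
        return self.facts.get(fact, UNKNOWN)

    def assert_fact(self, fact, value):
        # A contradiction is reported when a fact must be both True and False.
        old = self.facts.get(fact)
        if old is not None and old != value:
            raise ValueError(f"Contradiction on {fact}: {old} vs {value}")
        self.facts[fact] = value

def kill(timeline, a, b, t):
    """Action 'A kills B at time t': the precondition is that both are alive;
    the post-condition is that B is no longer alive afterwards."""
    if timeline.value(("alive", a, t)) == FALSE or timeline.value(("alive", b, t)) == FALSE:
        raise ValueError("Precondition failed: both A and B must be alive")
    timeline.assert_fact(("alive", a, t), TRUE)
    timeline.assert_fact(("alive", b, t), TRUE)
    timeline.assert_fact(("alive", b, t + 1), FALSE)

tl = Timeline()
tl.assert_fact(("alive", "F", 0), TRUE)
tl.assert_fact(("alive", "S", 0), TRUE)
kill(tl, "S", "F", 0)  # F is recorded as dead at time 1
```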

The simulation is taken from the point of view of the time traveler, defined as S, and his actions toward his parent, F. The first action is that F has a child, S. The system is aware that F needs to exist in order to beget S. Then S travels back in time to kill F, creating the paradox, and the system reports a logical contradiction. The proposed solution to repair this paradox is that S has a clone, S1. They are two separate entities, so the contradiction is resolved, but they retain the same identity. Thus, S goes back in time and creates a clone that kills F. This also reports a contradiction: “The user (clone S1) kills his parent F; this introduces the parent paradox and, indeed, our reasoning engine reports a contradiction: F is supposed to be dead because he was killed, but he is also supposed to be alive to beget S” [1, p. 10].

The solution that follows is not as elegant as the researchers would like. “F gives birth to S and goes back in time. S, right after being born, goes back in time as well, and kills his parent F. Right after killing his parent, though, S (actually the clone, S1) gives birth to him. In this solution, therefore, S is F’s son, F’s father, and F’s killer” [1, p. 12]. A much simpler solution to the paradox is produced by having the system not assume any actions by the time traveler: “F travels to the future, begets S, and travels back just in time to be killed by a clone of S who also went back in time” [1, p. 12].

References

[1]     D. Friedman, “A computer program for simulating time travel and a possible ‘solution’ for the grandfather paradox,” arxiv.org. [Online]. Available: https://arxiv.org/ftp/arxiv/papers/1609/1609.08470.pdf

500-Word Summary of Article About Operating System

TO:              Prof. Ellis
FROM:         Rameen Khan
DATE:          Oct. 06, 2021
SUBJECT:    500-Word Summary of Article About Operating System

The following is a 500-word summary of a peer-reviewed article about hardware/software partitioning of operating systems. The author explains that an operating system is the primary software that manages all the hardware and other software on a computer. The operating system, also known as an “OS,” interfaces with the computer’s hardware and provides services that applications can use. According to Vincent J. Mooney, “An Operating System (OS) implements in software basic system functions such as task/process management and I/O” [1, p. 1].

According to this article, the approach is “RTOS/SoC codesign where both the multiprocessor SoC architecture and custom RTOS (with part potentially in hardware) are designed together” [1, p. 1]. In other words, the target is a system-on-a-chip (SoC) architecture with reconfigurable logic and multiple processing elements sharing a common memory. An RTOS (Real Time Operating System) performs these functions while also being uniquely designed to run programs with highly accurate timing and a high degree of consistency.

In this article, Figure 2 shows that “A Graphical User Interface (GUI) allows the user to select desired RTOS features most suitable for the user’s needs” [1, p. 1]. To summarize this quote, a GUI is the medium through which interaction between humans and computers occurs; it consists of information output from the machine as well as a set of control elements with which the user performs certain actions.

In this article, “The Hardware/Software RTOS generation framework takes as input the following four items: Hardware RTOS Library, Base System Library, Software RTOS Library and User input” [1, pp. 1-2]. To elaborate on this quote, the SoCLC, SoCDDU, and SoCDMMU are among the RTOS hardware IP available in the framework. In a shared-memory multiprocessor SoC, the SoCLC stores lock variables in a separate lock cache outside the memory system, decreasing lock latency, lock delay, and bandwidth usage.

According to the author of this article, “Hardware RTOS components have well-defined interfaces to which any PE (Process Element) can connect and thus use the hardware RTOS component features” [1, p. 2]. To summarize this quote, the hardware RTOS components expose standard interfaces, so any processing element in the SoC can connect to them and use their features.

Conclusion: The entire sequence of events that occurs in hardware and software interaction is under the control of the OS. All the driver software helps the OS communicate with the hardware in order to execute the application software. The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner. The hardware/software partitioning process is a critical component of the codesign technique; it determines which functions will be implemented in hardware components and which will be implemented in software components. Applications invoke these routines through the use of specific system calls. This underlying structure and its design are called the system architecture.

Reference:

[1] V. J. Mooney, “Hardware/software partitioning of operating systems [SoC applications],” 2003 Design, Automation and Test in Europe Conference and Exhibition, 2003, pp. 338-339, doi: 10.1109/DATE.2003.1253630.

500-Word Summary of Article About Animation Design

TO:                        Prof. Ellis
FROM:                  Ulises Mora
DATE:                   Oct 6, 2021.
SUBJECT:             500-Word Summary of Article About Animation Design

The following is a 500-word summary of a peer-reviewed article about a study of different uses, designs, and techniques in 2D and 3D animation. The authors of the article explain how these forms of animation differ and the concepts that relate them. According to Repenning et al., “Here, we discuss our experiences with the differences between 2D and 3D as they relate to three concepts connecting computer graphics to computer science education: ownership, spatial thinking, and syntonicity.” [1]

The opportunity to program in 3D environments through computer applications gives anyone an additional chance to learn, to engage, and to see programming as something more dynamic and effective. Often, the people who interact most with animations are the ones who end up interested in programming; but when they start to study it in depth, they realize that programming does not meet their expectations. Because of this, the authors decided to start a new study about animations.

There is a bond between 2D and 3D animation. Regarding ownership, also known as motivation: if there is no motivation or interest, students simply will not succeed in the field. While using a 2D drawing editor, students stayed interested in programming despite the basic animation; even though it does not match the resolution of 3D, students were satisfied. A 3D editor, however, is much more complex and has a professional interface, which makes it difficult to start using.

The authors implemented a model in which students create a drawing in a 2D editor and then inflate the drawing to convert it to 3D. Many students found motivation in creating 3D models that were not systematic but were mostly basic in shape. Instructors were unhappy about this program because students spent a lot of time creating these inflated models. Nevertheless, inflating 2D images is not enough; students must do more to understand the three dimensions through spatial thinking. Occasionally, 3D design can be easier to work with than 2D design because code errors can go unnoticed, whereas in 2D these errors cost a lot, since they may end in lag and low performance.

Because formulating the code can be complicated, syntonicity asks programmers to project themselves onto the object being programmed. The authors implemented a program that used syntonicity so that children could understand, in a first-person mode, what the program would feel and which directions to give it, making the program easier to interpret.

Beginning programmers loved this first-person mode, but advanced ones disliked it because of the number of unnecessary voice commands for directions. The authors then implemented syntonicity in their own program so that all programmers would be comfortable using it. High resolution in 3D animation development is important for motivating computer science students, and it is also a teaching approach that is dynamic and does not bore the student.

Reference

[1]          A. Repenning et al., “Beyond Minecraft: Facilitating Computational Thinking through Modeling and Programming in 3D,” in IEEE Computer Graphics and Applications, vol. 34, no. 3, pp. 68-71, May-June 2014, doi: 10.1109/MCG.2014.46.

500-Word Summary of Article About Object-oriented Programming (OOP)

To: Prof. Ellis
From: Hamzat Olowu
Date: Oct. 6, 2021
Subject: 500-Word Summary of Article About Object-oriented Programming (OOP)

The following is a 500-word summary of a peer-reviewed article about object-oriented programming (OOP) and its role in computing. The authors discussed the traditional procedural approach and how OOP made it obsolete. According to Hinsch et al., “Applications today do not offer the consistency and flexibility needed to make the computing environment more productive for end-users.” [1, p. 1]

The complexity of programming projects led to the popularity of the object-oriented programming (OOP) approach, because the approach breaks a complex problem down into smaller, simpler chunks that are isolated and then solved. This approach to problem solving makes OOP code much easier to read and understand. OOP was such a unique and refreshing concept that it quickly rose to fame after it was devised; the invention of C++ and Object Pascal helped the concept break into the technical community, and many more articles have been written around the OOP approach.

Applications built without OOP concepts are not as consistent or as flexible as the ones built with them, which makes programming more difficult, because technologies keep expanding and the traditional approach to problem solving and programming can’t keep up. OOP offers many benefits, not only for programmers but for users as well, including manipulating objects easily on a desktop, system-level services that run without the user’s involvement, and control of applications with scripts. For developers, using OOP improves productivity and makes it easier to develop applications that need a graphical interface.

There are three different groups that will be affected in different ways by the advent of object-oriented programming. The first is the power user; although power users will not recognize the programming concepts, they will still notice that problems are easier to solve and that their jobs are easier. The second is the general business programmer, who will have a better time developing graphical and database applications, because they will be able to develop these applications with an ease not possible before. Systems-level developers are the ones who fully understand the capabilities of OOP; using OOP and its tools they will create the most innovative and complex applications, and through these they will contribute the most to the expansion of the computing industry.

Before the OOP approach came the traditional procedural approach, which is too linear in nature to be used when there is interaction with the user or other programs. Traditional programming can get repetitive when trying to accomplish the same task on different objects, such as delete character, delete word, and delete paragraph; this was one of the disadvantages OOP solved. The primary approach of OOP is to enclose data in what are known as classes, which are groups of objects sharing the same methods and information. Classes can have subclasses that inherit data and methods from their parent classes, a concept called inheritance. This makes it possible to reuse software code, which simplifies integration with other applications and increases productivity while decreasing programming time.
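As a brief illustration of the class and inheritance ideas summarized above, here is a hedged Python sketch; the class names are invented for this example and do not come from the article.

```python
# Illustrative sketch of classes and inheritance; names are invented for the example.
class Document:
    def __init__(self, text):
        self.text = text

    def delete_range(self, start, end):
        """One generic delete operation instead of separate delete-character,
        delete-word, and delete-paragraph routines."""
        self.text = self.text[:start] + self.text[end:]

class Memo(Document):
    """Memo inherits the data (text) and methods (delete_range) of Document
    and only adds what is specific to a memo."""
    def __init__(self, text, recipient):
        super().__init__(text)
        self.recipient = recipient

m = Memo("Reuse the parent's methods instead of rewriting them.", "Prof. Ellis")
m.delete_range(0, 6)   # uses the inherited method
print(m.text)          # prints "the parent's methods instead of rewriting them."
```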

Reference:

[1] K. Hinsch et al., “Object-oriented programming: Its role in computing,” Library Software Review, vol. 9, no. 1, pp. 18+, Jan.-Feb. 1990. Gale Academic OneFile.

500-word summary about the SYVR system.

To: Professor Ellis
From: Jiaqi Huang
Date: Oct 6, 2021
Subject: 500-word summary about the SYVR system. 

The following is a 500-word summary of an article about the possibilities of the SYVR system. The SYVR virtual assembly 5G integrated system has multiple functions: it can construct stable virtual scenarios, capture multidimensional industrial data, simulate the virtual assembly process, and present data visualization results based on its physical simulation, which saves time and cost, creates more value, improves product quality, and increases efficiency in industrial areas. Even with 5G high-speed networks, mainstream VR graphics engines focus mainly on games, movies, and education; results in industrial areas are comparatively few, cannot meet demand, and are not diverse enough.

The performance of virtual technology over 5G high-speed networks in virtual manufacturing is not yet mature, and many important problems need to be solved, such as the transfer of real-time data, data generation and visual display over high-speed networks, the building of models in a virtual scene, and the implementation of high-level physical simulation techniques.

Based on the article, the system can build virtual environments for various industries by using realistic physics simulation and real-time rendering techniques. Users can also adjust data in real time, present complex scenes, and build interactive systems in a 5G high-speed network environment. The industrial virtual manufacturing system can import models, process data, and provide visual presentations to users.

The SYVR engine saves a great deal of time and cost and brings more value to industrial production and the equipment manufacturing fields by providing virtual testing and design simulation. It can construct virtual experiment scenes based on multidimensional industrial data input and processing, show users the effects visually, and create interactive simulation systems, but all of these functions require the 5G high-speed network and the virtual environment.

The SYVR system supports real-time operation guidance, daily inspection, dynamic 3D production display, and employee training, which means users can get sufficient training in a simulation of a realistic industrial production line, reducing the number of accidents and improving product quality with higher efficiency. The system can provide high-quality industrial VR/AR while keeping high bandwidth and low delay thanks to its strong computation and real-time simulation, and it can build industrial digital application systems on its data simulation platform to help 5G high-speed networks make contributions in different areas. The development of the 5G high-speed network, in turn, helps the SYVR system advance its high-performance industrial real-time simulation and graphics rendering engine toward a higher level of simulation effects.

The SYVR project can improve the safety and correctness of key operations through functions such as virtual operation guidelines, daily inspections, and real-time data visualization, and the need for this system can also speed up the development of 5G virtual technology, expanding its application in industrial and medical areas: “Large-scale applications, upgrade the existing VR experience level, provide more possibilities for industrial, medical and other scenarios with high requirements on low latency, and explore more commercial 5G + VR intelligent manufacturing solutions” [1, p. 2].

I believe this system can provide people with a safer and more efficient work environment.

Reference:

[1] Y. Liu, Y. Tang, J. Zhao, O. Sun, M. Lv, and L. Yang, “5G+VR industrial technology application,” 2020 International Conference on Virtual Reality and Visualization (ICVRV), 2020, pp. 336-337, doi: 10.1109/ICVRV51359.2020.00090.

500-Word Summary of Article About Improving Safety and Performance Testing for EV Batteries

To: Professor Ellis
From: Mohammed Sakib Islam
Date: 10/06/2021
Subject: 500-Word Summary of Article About Improving Safety and Performance Testing for EV Batteries

Below is a 500-word summary of a peer-reviewed article about improving safety and performance testing for EV batteries. When creating an electric vehicle that is functional, safe, and durable, battery technology is the first important thing to keep in mind. This technology will be required in the coming period to help Europe transition to a low-carbon, resource-efficient economy. To ensure the protection and sustainability of European customers, government and industry should compose a list of inspection procedures so that any oversight can be rectified. Testing a battery pack before it enters the market, and making sure it needs no alteration that could affect customers, is a crucial step. Performance, endurance, reliability, and abuse tests are performed on batteries to determine whether a battery pack could be hazardous to customers, and they are done in controlled environments so that further improvements can be made before deployment to the market. According to the authors, “Given that EVs have been prioritized as a ‘green’ solution to decarbonize Europe’s transport sector, it is also important for pre commercialisation tests to measure their environmental impact over their entire lifecycle” [1, p. 2].

Test procedures established to a certain standard at the international level may be subject to evaluation by the International Electrotechnical Commission (IEC) and the International Organization for Standardization (ISO). Some IEC and ISO standards are cited as brief examples of the need for battery testing, such as IEC 62660-1 and -2; IEC 61982; IEC 62485-3; ISO 6469-1, -2, and -3; and ISO 12405-1, -2, and -3. Both EUROBAT and JRC-IET share the belief that, for safety, all batteries should be manufactured to a common set of global standards before they are assembled into an EV. To raise awareness at the international and national level, EUROBAT has collaborated with EU institutes, and it even participates in official meetings between European and Chinese regulators. EUROBAT and JRC-IET have concluded a memorandum of understanding demonstrating their cooperation in the study of battery performance tests and evaluations, as well as regulations for clean, efficient batteries and safe electricity storage, particularly for transport. Cooperation between the partners offers a favorable opportunity to resolve issues that arise during the BESTEST battery testing activities. The BESTEST activity aims to strengthen the relationship between JRC-IET and relevant European industries and their representative associations. EUROBAT and JRC-IET understand that such cooperation can pave the way for a better future for Europe’s transition to a low-carbon economy, as well as a much better relationship between the United States and Europe regarding the safety of batteries in electric vehicles prior to their market deployment.

Reference:

[1]       A. Westgeest and L. Brett, “Improving safety and performance testing for EV batteries,” 2013 World Electric Vehicle Symposium and Exhibition (EVS27), 2013, pp. 1-4, doi:10.1109/EVS.2013.6914940

500-Word Summary of Article About 5G technology

TO:                         Prof. Ellis
FROM:                  Shuaixiang Feng
DATE:                   Oct 5, 2021
SUBJECT:             500-Word Summary of Article About 5G technology

The following is a 500-word summary of a peer-reviewed article about D2D communication in 5G. Following the increased number of devices, we require a faster, more extensive, and more reliable network, and the 5G network can satisfy that demand. D2D communication will be an excellent solution for providing better network service. D2D communication exploits the “physical proximity” of communicating devices: devices skip the indirect connection through the Base Station and connect directly with one another. According to the authors, “D2D functionality supports high baud rates and reduces latency between devices, making it particularly important in meeting the standards set for 5G network deployment.” [1, p. 987]

The authors illustrate four scenarios to explain the difference between traditional wireless communication and D2D communication in a two-layer 5G cellular network. The main feature that distinguishes D2D communication from traditional communication is that the Base Station or Access Point is avoided when two devices are close enough. In the first scenario, devices must first establish a connection with the Base Station in order to connect with others. In the second, devices are still controlled by the Base Station but connect directly thanks to D2D communication. In the third, devices need many more relays to carry traffic from source to destination when the connection does not include the Base Station. In the fourth, two devices connect directly with no Base Station involvement at all.

D2D communication has further advantages besides higher security and reduced cost: better coverage, higher data transmission speed, reliable communication, the ability to transfer more data over the same radio frequency, higher energy efficiency, heterogeneous linkage, and low latency. D2D networks can also be used for M2M communication in the IoT, where real-time responses and low latency are required. D2D communication can likewise contribute to IoT in smart cities thanks to its efficient use of the radio resource and its small energy cost. Further, D2D communication can be used for V2V communication, supporting vehicle-to-vehicle data sharing.

Among the essential functions of D2D applications, users may find each other through a social application and transmit data or services; HD video recording devices set high requirements for the base network and the spectrum resources, and D2D helps reduce the transmission pressure on the network.

D2D makes it necessary to enhance the regular security system so that it is prepared for every threat. Currently, D2D security is built on many cryptographic algorithms, such as Diffie-Hellman and Rivest-Shamir-Adleman. The paper’s authors identify the potential security vulnerabilities and classify the attacks, such as impersonation and IP spoofing, which may create a severe security threat.
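To illustrate the kind of cryptographic building block mentioned above, here is a toy-sized sketch of a Diffie-Hellman key exchange in Python. The tiny parameters are an assumption made for readability; real D2D security would rely on standardized parameters and vetted cryptographic libraries.

```python
# Toy Diffie-Hellman key exchange, for illustration only.
# Real deployments use large standardized primes and vetted crypto libraries.
import secrets

p = 4294967291          # a small prime (NOT secure at this size)
g = 5                   # public generator

a = secrets.randbelow(p - 2) + 2   # device A's private key
b = secrets.randbelow(p - 2) + 2   # device B's private key

A = pow(g, a, p)        # A's public value, sent to device B
B = pow(g, b, p)        # B's public value, sent to device A

shared_a = pow(B, a, p) # computed by device A
shared_b = pow(A, b, p) # computed by device B
assert shared_a == shared_b        # both devices now hold the same shared secret
```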

D2D communication is a new wireless technology that can reduce the network’s load and become a new social networking tool. In addition, D2D is one of the critical technologies in 5G network, which enables it to support low latency, fast transmission speed, wide-coverage, and low energy consumption.

Reference:

[1] S. Čaušević, A. Medić, and N. Branković, “D2D technology implementation in 5G network and the security aspect: A review,” TEM Journal, vol. 10, no. 2, pp. 987–995, May 2021.

500-Word Summary of Article About Malware Detection in Self-Driving Cars

TO:       Prof. Ellis
FROM:     DeAndre Badresingh
DATE:     Oct. 6, 2021
SUBJECT:  500-Word Summary of Article About Malware Detection In Self-Driving Cars

As transportation becomes more intelligent, it becomes more vulnerable to cyber-attacks. There have been many cases where users lost control of their vehicle because someone attacked its systems, typically through various forms of malicious software. Malicious software replicates software already authorized for self-driving vehicles. Methods and experiments have been put in place to analyze the detection of compromised self-driving cars. Vehicle-to-vehicle protection is important because it allows external connections not only to provide comfort for the driver but also to update the security. Security technology analyzes and scans the car’s security for intrusion detection.

The main way hackers access information is through malicious code, which allows them to gain or deny access to a user’s system. To combat this, a machine learning algorithm using the software called Adware and General Malware (AW&GM) is used to differentiate normal code from the malicious code used by hackers.

Attempted breaches come in many forms, including malicious messages, denial of service, and even adware. One method, which involves reconfiguring electronic control units, uses a control module known as a mitigation manager that scans for cyber-attacks. Another method for countering these types of attacks involves an algorithm that scans for unusual patterns within the vehicle’s network. A further concept is a cloud defense framework that allows a single gateway to monitor all traffic going into and out of the network.

Since self-driving vehicles are usually connected to public networks, security is key to protecting them, because there is a higher chance of the operating system being compromised. In a recent machine learning study, intrusion detection was installed in the vehicles, allowing the unit to actively see real-time changes in behavioral rhythm. With this new software, the algorithm can determine intrusions more accurately by learning, verifying, and evaluating messaging patterns.

In the event of unusually high network traffic, intrusion detection relies on scaling. Scaling prevents under- or overflow of data during the experiments. When the environment is right, multiple rounds of tests begin. More tests are required during the experimental phase because some results may come back as false positives.

To conclude, intrusion detection systems go through three phases: data preprocessing, modeling, and detecting. Simulated results are compared against the proposed algorithms. Benign code, adware, and general malware are the classification scenarios.

Using random forest, also known as RF, has been proven to give higher prediction accuracy. It has been concluded that an algorithm with a short learning time can be used to produce the most accurate results. Receiver operating characteristics are also used to evaluate the results of the tests. Each of these methods produced a different success rate. Since transportation is ever-changing, security has to keep up to protect users.
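As a hedged sketch of how such a random-forest classifier might be trained and evaluated, here is a short Python example using scikit-learn. The feature matrix and labels are synthetic placeholders standing in for the benign, adware, and general-malware classes; they are not the paper’s dataset.

```python
# Illustrative sketch only: synthetic data stands in for the paper's
# benign / adware / general-malware feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))        # placeholder feature vectors
y = rng.integers(0, 3, size=300)      # 0 = benign, 1 = adware, 2 = general malware

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)             # learning phase
preds = clf.predict(X_test)           # detecting phase
print("accuracy:", accuracy_score(y_test, preds))
```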

Reference:

[1] S. Park and J.-Y. Choi, “Malware detection in self-driving vehicles using machine learning algorithms,” Journal of Advanced Transportation, vol. 2020, pp. 1–9, 2020. 

500-Word Summary of Article About Software Protection

To: Professor Ellis
From: Roshel Babayev
Date: 10/5/2021
Subject: 500-Word Summary of Article About Software Protection

Computer systems have many vulnerable points, with the most vulnerable aspect being the system administrators. A very common attack, known as a man-at-the-end (MATE) attack, is performed via tampering based on information obtained by reverse engineering (which is highly illegal). To stop these types of attacks, we try our best to ensure all items are in proper order by verifying their signatures. We implement obfuscation to prevent (or at least slow down) the reverse engineering process and to preserve the integrity of the software. For a MATE attack to proceed, the malicious user must get their hands on the software and reverse engineer it, but with software protection implemented, that task becomes much harder. A MATE attack can be used for anything from helping you avoid paying your bills to something catastrophic, especially when it is used in a terrorist attack.

Today, the video gaming market is one of the most significant parts of the US economy, but cheaters who produce their own virtual in-game items (which have value in the real world) essentially devalue that economy. The larger issue with these attacks is that all of our information is stored digitally, including military secrets, and if someone, especially an outside party, could get their hands on this information, it could cause severe damage. Software protection isn’t a foolproof way to stop these sorts of attacks; it only delays the inevitable. There are four basic categories into which software protection falls: code obfuscation, tamper-proofing, watermarking, and birthmarking. Code obfuscation makes it much harder to reverse-engineer software. Tamper-proofing has the basic purpose of ensuring a file has not been modified in any way, via implemented checks. Watermarking puts a fingerprint on the software indicating who owns the reverse-engineered software, and it is often combined with tamper-proofing.
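As a small, hedged illustration of the tamper-proofing idea, here is one possible integrity check in Python. It is not the article’s specific technique, and the EXPECTED digest is a placeholder; real tamper-proofing schemes embed and protect such checks inside the software itself.

```python
# Minimal integrity-check sketch; real tamper-proofing hides and hardens
# checks like this inside the protected program.
import hashlib

def file_digest(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "known-good-digest-recorded-when-the-software-shipped"  # placeholder

def verify(path):
    """True if the file still matches the digest recorded at release time."""
    return file_digest(path) == EXPECTED
```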

In a sequence of articles, Mariano Ceccato and Paolo Tonella detailed a concept that lets the client hold a stub which, when run, streams the real code from the server to the client; each time the code is streamed it is mutated, so the original code can never be pulled out. Following that article’s release, another article showcased the new Trusted Platform Module chips, which are now regularly found on computers and allow for more effective tamper resistance. As a way to keep others from stealing proprietary code, open-source development allows a license to be put in place to prevent the theft of your code. Since code did not have a proper means of being copyrighted, a new license was introduced: a service license based on ODRL-S. While software protection is a must these days, the major downside is the performance hit that comes with using many security methods.

Reference:
[1] P. Falcarin et al., “Software protection,” IEEE Software, vol. 28, no. 2, pp. 24–27, 2011.