Weekly Writing Assignment, Week 5

This Weekly Writing Assignment is meant to help you vet or evaluate where some of your research comes from and report back what you find. Watch this week’s lecture before performing this assignment so that you learn more about the process that I suggest for discovering the information requested below.

For this assignment, refer to two journal articles from different journals that you came across in your research (or search for your Expanded Definition term again in IEEE Xplore and/or Academic Search Complete to find two examples for this assignment).

Using the built-in tools in the databases where you found each article, along with search sites like Google, DuckDuckGo, or Bing, learn more about each journal’s specialization and the kinds of research it publishes, and find the name of its editor-in-chief and their professional background (degrees, affiliation, and research specializations).

Then, type a short paragraph in your word processor of choice that identifies the names of the two journals you investigated for this assignment and describes in your own words what each journal specializes in. Also, identify each journal’s editor-in-chief and describe their professional details, such as their degrees and where they were earned, their affiliation (where they work or teach), and their research specializations (if possible to find).

Finally, copy-and-paste your paragraph into a comment added to this post.

This assignment should not take very long. Focus most of your time this week on completing a draft of your Expanded Definition essay.

Job Search Advice, Week 5

As discussed in this week’s lecture, I built an OpenLab Site called Job Search Advice. It offers help with preparing your resume, cover letter, and other materials for your job search. It includes a video lecture, sample documents, and useful links. It’s meant to be a useful resource for you all. If you know other City Tech students not in our class who might want to check it out, please feel free to share!

Summary of Von Solms and Futcher’s “Adaption of a Secure Software Development Methodology for Secure Engineering Design”

TO: Prof. Ellis 

FROM: Jennifer Martinez

DATE: 03/03/2021

SUBJECT: 500-Word Summary of Article About the Adoption of a Secure Software Development Methodology

The following is a 500-word summary of a peer-reviewed article about adopting a Secure Software Development Methodology for secure engineering designs. The authors discuss an approach to implementing security in engineering design through the normal Systems Development Life Cycle (SDLC): they first establish a baseline of students’ knowledge of security and then design a guideline to help students implement the secure software development methodology (SecSDM) in their projects. According to the authors, “Traditionally the information technology (IT) professionals were considered…responsible for cybersecurity,…However, as engineering and control systems became more integrated with the IT infrastructure, securing these systems cannot remain the sole responsibility of IT professionals” (Von Solms & Futcher, 2020, p. 125630). Therefore, it would be ideal for engineers to learn how to protect their designs. First, the authors conducted an analysis to determine how much knowledge engineering students had of software security. The capstone, a final-year project, was used for the analysis and focused specifically on hardware, software design, and testing. The results illustrated the disconnect engineering students had between software and security, since security was not a requirement. A survey given to the students after the project tested whether they understood the terminology and implementation of security; it confirmed that students understood the importance of security but lacked the knowledge and training. Following the baseline, the authors designed a guideline for students to secure their projects by integrating security into the systems development life cycle (SDLC) through the SecSDM. First, in the exploration phase, the engineer must explore the technology readiness, conduct a risk analysis, and follow the SecSDM suggestion to define the security requirements by the ISO/IEC TR.
Based on the pre-evaluation, the engineer must then recommend possible solutions, define the system requirements and product specifications, and follow the SecSDM suggestion to identify the security services that satisfy those requirements. The goal of the design and development phase is not only to design the system architecture and allocate system requirements to subsystems, but also to map the security services to specific security mechanisms, following the SecSDM recommendation to use the “ISO 7498-2 standard’s security mechanisms” (Von Solms & Futcher, 2020, p. 125635). The production and implementation phase involves the construction of subsystems, system integration, and testing, during which the engineer should apply the appropriate security controls based on the SecSDM recommendations. During the utilization and support phase, the engineer is responsible for ensuring the product operates according to the user’s needs and for continuously monitoring the software and firmware to ensure the product is secure and used correctly. Finally, the SecSDM has no specific requirements for the retirement phase other than that the engineer must teach the user how to dispose of the data and product properly. Although this paper motivated various proposals on integrating secure software practices into engineering design, there is still no practical approach for doing so.
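The phase-by-phase guideline traced in this summary can be condensed into a simple lookup structure. The sketch below is my own restructuring of the summary’s points, not an artifact from the paper:

```python
# SecSDM security activities per SDLC phase, condensed from the summary above.
# The dictionary layout and wording are mine, not the authors' own notation.
secsdm_by_phase = {
    "exploration": [
        "assess technology readiness",
        "conduct risk analysis",
        "define security requirements",
    ],
    "concept/requirements": [
        "recommend possible solutions",
        "identify security services that satisfy the requirements",
    ],
    "design and development": [
        "map security services to specific security mechanisms",
    ],
    "production and implementation": [
        "apply security controls during integration and testing",
    ],
    "utilization and support": [
        "continuously monitor software and firmware",
    ],
    "retirement": [
        "teach the user to dispose of data and product properly",
    ],
}

for phase, activities in secsdm_by_phase.items():
    print(f"{phase}: {', '.join(activities)}")
```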

Reference 

Von Solms, S., & Futcher, L. A. (2020). Adaption of a secure software development methodology for secure engineering design. IEEE Access, 8, 125630–125637. https://doi.org/10.1109/ACCESS.2020.3007355

Summary of Kandan et al.’s “Network attacks and prevention techniques – A study”

TO: Prof. Ellis

FROM: Jerry Chen

DATE: 3/3/2021

SUBJECT: 500-Word Summary of Article About Network Attacks and Preventions

The following is a 500-word summary of a peer-reviewed article about the types of attacks and their prevention in today’s networks. The authors discuss the types of network attacks that currently exist, noting that many individuals and small businesses are still unaware of the importance of configuring their network gear, which opens the door to attacks. According to the authors, “Any data passes over large number of workstations and routers which sometimes very weak due to organizational structures and their policies which may lead to damages and attacks” (Kandan et al., 2019, p. 2). Network security comes in two types: hardware security and software security. Hardware security is like a defensive system and is often used in corporations, while software security is application-based and typically used by individuals or small firms. As mentioned by the authors, “if the system is not implemented the proper security methods and control over their network, then there is a way for attacks from internal or external using these techniques” (Kandan et al., 2019, p. 2). There are some major types of attacks that attackers use most frequently nowadays, such as browser attacks, man-in-the-middle (MITM) attacks, and botnets.

A browser attack is the most frequent web-browser-based type of attack, in which the attacker hacks into the system by adding malware to the browser. A man-in-the-middle (MITM) attack is one in which the attacker intercepts confidential data during the transmission between two victims and accesses the data without the victims’ awareness. A botnet, a combination of “robot” and “network,” is one of the main attacks that attackers use to gather unauthorized confidential data from users.

Problems come with solutions, and attack techniques are no exception. According to the authors (Kandan et al., 2019, p. 4), preventions exist for each of these attacks, including MITM prevention, HTTPS, and botnet prevention. To prevent a MITM attack, the two endpoints should communicate over a more secure network and encrypt the transmission using an encryption protocol (Radhakishan & Selvakumar, 2011). HTTPS protects users from browser attacks by providing a more secure connection over the browser: certificates are issued only to the participating entities and are verified by each party before transmission. Moreover, to prevent a botnet attack, the user should make sure the intrusion detection system is up to date and should specifically configure, or shut down, the ports not currently in use.
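The HTTPS prevention described above rests on certificate verification during the TLS handshake. As an illustrative sketch (mine, not from the article), Python’s standard `ssl` module shows the two client-side settings that reject a man-in-the-middle endpoint before any data is transmitted:

```python
import ssl

# Build a client-side TLS context with secure defaults. These defaults are
# what defeat an interceptor who cannot present a valid certificate.
context = ssl.create_default_context()

# The peer must present a certificate that chains to a trusted CA...
assert context.verify_mode == ssl.CERT_REQUIRED
# ...and that certificate must match the hostname being contacted.
assert context.check_hostname is True

# Wrapping a socket with this context (e.g. for an HTTPS request) raises
# ssl.SSLCertVerificationError if an attacker substitutes their own
# certificate during the handshake.
print("TLS context enforces certificate and hostname verification")
```

Disabling either setting is what makes a connection interceptable, which is why the article’s advice to rely on HTTPS rather than plain HTTP matters.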

As network security is constantly changing, attackers are always using their technical knowledge to explore new types of attacks to fulfill their purposes. Individuals and small businesses should always configure their network gear and install security software to monitor for attacks, decreasing their chances of being attacked.

Reference

Kandan, A. M., Kathrine, G. J. W., & Melvin, A. R. (2019). Network attacks and prevention techniques – A study. 2019 IEEE International Conference on Electronics, Communication and Computing Technologies (ICECCT). https://doi.org/10.1109/ICECCT.2019.8869077

Radhakishan, V., & Selvakumar, S. (2011). Prevention of man-in-the-middle attacks using id-based signatures. 2011 Second International Conference on Networking and Distributed Computing. https://doi.org/10.1109/icndc.2011.40

Summary of Feng Shi et al.’s “Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19”

TO: Prof. Ellis

FROM: Neil Domingo

DATE: 3/3/2021

SUBJECT: 500-Word Summary of Article About Utilizing Artificial Intelligence In Fighting COVID-19

The following is a 500-word summary of a peer-reviewed article about the use of artificial intelligence in medical imaging during the COVID-19 pandemic. The article’s goal is to discuss the use of medical imaging with artificial intelligence in the fight against COVID-19 and to review machine learning methods across the imaging workflow. Medical imaging such as CT scans and X-rays has been found to play a critical role in restraining the transmission of COVID-19. CT scanning is one of the imaging-based diagnostic methods used for COVID-19 and includes three stages: pre-scan acquisition, image acquisition, and disease diagnosis. Artificial intelligence contributes to the fight against COVID-19 by allowing for safer, more accurate, and more efficient imaging solutions. Imaging facilities and workflows should be considered important for reducing risks and saving lives. According to the authors, “AI-empowered image acquisition can significantly help automate the scanning procedure and also reshape the workflow with minimal contact to patients, providing the best protection to the imaging technicians” (F. Shi et al., 2020, p. 4). Contactless image acquisition is necessary to reduce the risk of technicians and patients infecting one another through contact. Artificial intelligence can enable contactless scanning by identifying the pose and shape of a patient using data from visual sensors. The scan range, meaning the start and end points of a CT scan, can be estimated with visual sensors and artificial intelligence, improving scanning efficiency. A mobile CT platform with artificial intelligence implemented is an example of an automated scanning workflow that prevents unnecessary interaction between technicians and patients; its patient positioning algorithm captures the patient’s pose.

Segmentation is crucial in image processing and analysis for assessing COVID-19, as it delineates the regions of interest (ROIs): the organs and areas affected or infected by COVID-19. CT produces high-quality 3D images from which the ROIs can be segmented. Approaches such as integrating human knowledge and machine learning methods into a segmentation network can provide adequate training data for segmentation tasks. Image segmentation allows radiologists to accurately identify lung infection and to analyze and diagnose COVID-19.

Patients suspected of having COVID-19 need diagnosis and treatment, and because COVID-19 resembles pneumonia, AI-assisted diagnosis using medical images can be highly beneficial. Deep learning models such as ResNet50 have been proposed to detect COVID-19 in X-ray images. The ResNet50 model performs two tasks: classification between COVID and non-COVID cases, and anomaly detection, which optimizes the COVID-19 score used for classification. With the help of artificial intelligence, studies have separated COVID-19 patients from non-COVID-19 patients, and the reading time of radiologists was reduced by 65%.

With many studies proposing CT-based COVID-19 diagnosis and showing promising results, early detection and prediction of severity are important. It remains challenging for artificial intelligence to be used in procedures involving the incubation period and infectivity. X-rays and CT scans are not often available for COVID-19 applications, which slows the continued research and development of artificial intelligence methods.

Reference

F. Shi et al., “Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19,” in IEEE Reviews in Biomedical Engineering, vol. 14, pp. 4-15, 2021, doi: 10.1109/RBME.2020.2987975.

Summary of Dixon Jr.’s “Artificial Intelligence: Benefits and Unknown Risks”

TO: Professor Ellis 

FROM: Shoron Reza 

DATE: 02/28/2021 

SUBJECT: 500-Word Summary of Article About Artificial Intelligence  

The following is a 500-word summary of a journal article about the benefits of artificial intelligence, as well as its unknown risks, in the context of the judicial system. The author highlights that AI is a technological advancement that has substantially influenced the world of law and criminal justice. He also emphasizes that there have been a few AI inaccuracies that have raised red flags in the past. The article focuses on artificial intelligence related to eDiscovery, predictive policing, crime solving, risk assessment, and the judicial use of AI in court cases. It also describes the development of AI facial recognition software to identify prime suspects and criminals. Artificial intelligence has been a great help to society and its people; however, there are many risk factors that have not yet been officially acknowledged.

AI has the ability to be efficient and effective with resources, decreasing the work of manual laborers as well as increasing the comfort of people’s lives. AI has made a significant impact in the world of law enforcement by assisting with facial recognition and crime prediction algorithms. With the help of AI, police officers have had an easier time recognizing suspected criminals and calculating the possibility of a perpetrator committing a crime again. Additionally, AI-based facial recognition software has been employed to identify suspects from images caught on security cameras, cell phones, and other video sources (Dixon, 2021). Law enforcement agencies use PredPol, an AI algorithm that helps forecast crimes in certain areas on a daily basis. Researchers are also introducing algorithms to determine the class of weapon based on gunshot audio. These algorithms will be able to recreate crime scenes from available data, helping investigators improve their understanding of an event. COMPAS is an AI-driven tool used to assess the risk posed by defendants in criminal cases.

In recent events, the Department of Justice discovered repeated law violations by police regarding the use of extreme force against Black people and minority groups, as well as failures to address issues of violence against women. Evidence indicates that the developers of the algorithm did not provide accurate historical data. Supporters of AI think it is the only way to decrease human error and bias in the courts. However, studies show that AI has actually revealed bias in the past, mistakenly labeling Black defendants as potential reoffenders more often than white defendants. In a study conducted in 2017, researchers recruited volunteers online and asked them to predict whether certain individuals were likely to repeat a crime; the crowdsourced predictions were as precise as COMPAS at predicting repeat offenders. Although AI algorithms are good at recognizing consistency in data and can generate predictable results, consistency and predictability are not the same as fairness. Technologies and machines struggle to operate in a world where biases and prejudices exist. Overall, AI plays an important and increasing role in our lives and in the criminal justice system. However, it can sometimes do more harm than good, and we as a society need to re-evaluate the uses of AI.

Reference 

Dixon Jr., H. B. (2021). Artificial intelligence: Benefits and unknown risks. Judges’ Journal, 60(1), 41–43.

Summary of Yuzhao Wu et al.’s “Cloud Storage Security Assessment Through Equilibrium Analysis”

TO: Prof. Ellis

FROM: Mahir Faisal

DATE: 03/03/2021

SUBJECT: 500-Word Summary of Article About Cloud Storage Security Assessment Through Equilibrium Analysis

The following is a 500-word summary of a peer-reviewed article about the security analysis of cloud storage solutions. The authors discuss how cloud providers and third-party providers can provide strong security measures and effective data protection to make data more secure and reliable in cloud infrastructure. Cloud storage solutions have been widely used by companies and enterprises to put their data and information on cloud servers. Users can upload their data to the cloud and access it without issues; however, because user data contains confidential information, network attackers target third-party cloud service providers to hack the user data. Some methods and schemes have been proposed for risk assessment of the cloud, which help cloud providers act as defenders of security. However, users cannot fully trust these service providers: they may ensure the integrity and confidentiality of the data, yet they may still access the content of the data. For example, cloud service providers are responsible for the security of the data, whereas cloud infrastructure providers make resources available on the cloud and do not perform security assessments the way cloud service providers do. There is a chance of conflicts of benefits between attackers and defenders, and this conflict may lead users to think that cloud providers lack appropriate assessment mechanisms. Some third-party service providers offer security services to cloud providers by encrypting user data, but their conflicts of benefit with cloud providers and users make them only semi-trustworthy, the same as cloud providers. According to the authors, “each person’s benefit is determined by the security level of the whole system” (Wu et al., 2019, p. 739). If the layers of security are strong, then an attacker needs to break the security mechanisms one by one, which makes the data difficult to decrypt.
Another point to note is that decision-makers can be divided into attackers and defenders, and users and cloud providers can act as either. To address this, game theory offers tools and models that help decision-makers form a strategy. The study assesses the security of public cloud storage providers and third-party mediators through equilibrium analysis; to be precise, the authors conduct evaluations and assessments on a series of game models between public cloud storage providers and users to analyze the security of different services. By using the game theory model, users can analyze the risk of whether their private data is likely to be hacked by the cloud service providers, and cloud service providers can form effective strategies to improve their services and make them more trustworthy. For example, if a cloud service provider follows a Nash equilibrium strategy and would not steal user data, then the cloud system provides effective internal security and confidentiality for user data and privacy. A semi-trustworthy third-party service provider will give additional security to user data if users trust third-party providers as much as they trust cloud service providers. I believe that cloud providers should emphasize strong security measures and assessment mechanisms to protect the confidentiality and integrity of user data.
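The equilibrium analysis described above can be illustrated with a toy example. The payoff numbers below are hypothetical (mine, not the paper’s), but the brute-force search for a pure-strategy Nash equilibrium shows the idea: a strategy profile is an equilibrium when neither the provider nor the user can gain by deviating alone.

```python
# Hypothetical payoffs for a 2x2 game between a cloud provider
# (rows: "protect" or "steal") and a user (columns: "use" or "leave").
# Each cell is (provider_payoff, user_payoff); detection penalties make
# stealing less attractive than honest service.
payoffs = {
    ("protect", "use"):   (3, 3),   # provider earns fees, user data is safe
    ("protect", "leave"): (0, 1),   # user forgoes the cloud's convenience
    ("steal",   "use"):   (2, -2),  # theft gains, minus detection penalties
    ("steal",   "leave"): (-1, 1),  # reputation loss, no customers
}

def pure_nash(payoffs, rows, cols):
    """Return (row, col) profiles where neither player gains by deviating."""
    equilibria = []
    for r in rows:
        for c in cols:
            p_r, p_c = payoffs[(r, c)]
            # Provider cannot do better by switching rows, holding c fixed...
            best_row = all(payoffs[(r2, c)][0] <= p_r for r2 in rows)
            # ...and user cannot do better by switching columns, holding r fixed.
            best_col = all(payoffs[(r, c2)][1] <= p_c for c2 in cols)
            if best_row and best_col:
                equilibria.append((r, c))
    return equilibria

print(pure_nash(payoffs, ["protect", "steal"], ["use", "leave"]))
# → [('protect', 'use')]
```

With these payoffs, the unique equilibrium is an honest provider and a participating user, which mirrors the memo’s point that a provider following an equilibrium strategy would not steal user data.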

Reference

Wu, Y., Lyu, Y., & Shi, Y. (2019). Cloud storage security assessment through equilibrium analysis. Tsinghua Science and Technology, 24(6), 738–749. https://doi.org/10.26599/TST.2018.9010127

Summary of Tessema Mengistu et al.’s “A “No Data Center” Solution to Cloud Computing”

TO: Professor Ellis  

FROM: Alexander Rossler 

DATE: 02/24/2021 

SUBJECT: 500-Word Summary of “A “No Data Center” Solution to Cloud Computing”

Cloud computing services are extremely helpful to many, but they are complex and expensive for a new company to set up. To begin with, cloud computing is the availability of computer resources over the internet; this could be anything from storage to software being streamed straight to another device. While there are consumer-level options, such as Google Drive and Adobe Creative Cloud, at the company level having your own private cloud service for employees and others involved is much more professional and organized. On top of this, having your own control of something this important makes it all the safer, in addition to whatever other forms of security you incorporate into it. Developing your own private cloud computing service can be expensive and time-consuming, but in the long run it would be quite inexpensive compared to services that charge extremely high fees depending on the hardware you require from them. You would be able to cut out the middleman pricing and handle everything on your own for a much more affordable price.

Instead of setting up data centers filled with expensive servers, we could use the Credit Union Cloud Model (CUCM), which allows us to use resources from computers that either have an overabundance of resources or are not currently being used. This model of sharing resources across multiple computers is not exclusive to cloud computing; we have seen it used for other purposes in the past, such as allocating small amounts of GPU power to solve complex algorithms being studied by scientists. The CUCM, though, would still need one or more machines dedicated to managing the volunteered machines (member nodes); these would act as the server for the cloud. They would be the only set-in-stone, permanent machines in the model, considering that any one computer can opt out of the model at any given point in time. To make this cloud model work, software must be installed on all nodes involved, including the member nodes and the management node(s), allowing the resources to be managed and allocated properly. There are quite a few decisions to make in terms of which software to choose, with different optimizations for the different hardware being utilized. Deciding on which program to use mainly depends on what hardware you decide to use as a resource manager and allocator.
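As a rough sketch of the management node’s bookkeeping described above (the class and method names are hypothetical, not from the paper), the model needs to track volunteered capacity, allocate work only where it fits, and survive members opting out at any time:

```python
# Minimal sketch of a CUCM-style management node. Member nodes volunteer
# spare capacity, may leave at any moment, and the management node hands
# out work only to members with enough free resources.

class ManagementNode:
    def __init__(self):
        self.members = {}  # node_id -> free capacity (e.g. CPU units)

    def join(self, node_id, capacity):
        """A volunteer machine registers its spare capacity."""
        self.members[node_id] = capacity

    def opt_out(self, node_id):
        """Any member can leave the cloud at any point in time."""
        self.members.pop(node_id, None)

    def allocate(self, demand):
        """Pick the member with the most free capacity that fits the demand."""
        candidates = {n: c for n, c in self.members.items() if c >= demand}
        if not candidates:
            return None  # no volunteer currently has enough spare resources
        chosen = max(candidates, key=candidates.get)
        self.members[chosen] -= demand
        return chosen

mgr = ManagementNode()
mgr.join("laptop-a", capacity=4)
mgr.join("desktop-b", capacity=8)
print(mgr.allocate(6))    # → desktop-b (only member with >= 6 free units)
mgr.opt_out("desktop-b")  # desktop-b leaves; its resources vanish from the pool
print(mgr.allocate(6))    # → None
```

The real model would add heartbeat monitoring and work migration when a member disappears mid-task, but the churn-tolerant allocation above is the core difference from a fixed data center.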

References:

T. Mengistu, A. Alahmadi, A. Albuali, Y. Alsenani and D. Che, “A “No Data Center” Solution to Cloud Computing,” 2017 IEEE 10th International Conference on Cloud Computing (CLOUD), Honolulu, CA, 2017, pp. 714-717, doi: 10.1109/CLOUD.2017.99.

Summary of Hitoshi Oi’s “Evaluation of Ryzen 5 and Core i7 Processors with SPEC CPU 2017”

TO: Prof. Ellis

FROM: Angel Rojas

DATE: Feb 27, 2021

SUBJECT: 500-Word Summary of Article About Benchmarking Ryzen 5 and Core i7 Processors

The paper summarizes a series of benchmark tests putting the high-end chips of two major processor companies to the test. Over the years, Intel and Advanced Micro Devices have been on the leading edge of competition while also collaborating. Although there is speculation about which side surfaced new technology first, one breakthrough came when Advanced Micro Devices settled on 64-bit computing for the x86 instruction set, which Intel then adopted. Since then, each generation of processors has been researched and developed, but have we reached a limit on microprocessors? It has been difficult to fit more transistors on a die, as there is a physical limit; following Moore’s law, we can see there will be a limit on the number of transistors we can fit in a single integrated circuit. This is due to the material these processors are made of, which is silicon.

Advanced Micro Devices (AMD) has competed with Intel in the CPU market, and in recent years AMD released the Zen microarchitecture, on which the Ryzen processors are based. AMD has reached the same number of physical cores per package as Intel. Each chip was run through benchmark programs to push it to maximum output. Both chips utilize the same x86 instruction set and are manufactured with 14 nm transistors. Ryzen is equipped with more cache than its Intel counterpart. Ryzen processors have fewer output ports on the scheduler, the unit that sets processes in order of execution and priority. While the Intel chip has more output ports built in, its scheduler is undocumented and no information about it is available. The Intel chip has a function called Turbo Boost, which increases the clock speed past a certain threshold, unlike the Ryzen chip, which is locked.

The software used was SPEC, a benchmark testing program with multiple test scenarios that simulate real-life uses of CPU power. The series of tests measures how fast the CPU can process instructions as if in a real-life workload. Both test benches utilized the same Linux OS (Ubuntu) and had the same amount of DDR4 RAM, 16 GB. The test results show that both chips excel in performance; however, the Intel chip consumed more energy than the Ryzen. Multithreaded simulations were difficult to properly examine due to synchronization. The tests show that Ryzen performs better in multithreaded tasks than Intel’s Core CPU. In conclusion, Intel’s 8th-gen i7 processor outperformed in the tests but consumed more energy than the Ryzen. Both chips performed similarly, but there are differences in the way they are manufactured.

Reference

H. Oi, “Evaluation of Ryzen 5 and Core i7 Processors with SPEC CPU 2017,” 2019 IEEE International Systems Conference (SysCon), Orlando, FL, USA, 2019, pp. 1-6, doi: 10.1109/SYSCON.2019.8836790.

Summary of “Cloud Computing in Amazon and Microsoft Azure platforms: Performance and service comparison”

The following is a 500-word summary of a comparison of cloud computing performance and services on the Amazon and Microsoft platforms. Cloud computing is a technology that enables flexible access to a wide range of computer system resources that operate independently. The main advantage of using cloud computing is the low initial investment in the information system. The main obstacle of cloud computing is that it is a distributed technology in a global market: we live in territorially limited jurisdictions, making it hard to protect data and confidentiality.

First, extensive research has been carried out to establish a methodology for evaluating service performance. Accordingly, this paper compares the two cloud computing platforms by looking at cloud computing service architecture, service categories, and related work. The cloud computing architecture model is based on the following entities: the front end, representing the user-controllable infrastructure; the back end, representing the cloud provider’s infrastructure; the central server, responsible for service management, traffic, and client requests; the hardware layer, responsible for managing the cloud’s physical resources; the virtualization layer, which includes computing and storage resources; the platform layer, consisting of the applied operating system and application framework; and the application layer, the top of the cloud architecture hierarchy, consisting of up-to-date cloud applications.

Second, all these layers provide three categories of services: infrastructure-as-a-service (IaaS), a provisioning model that outsources organizational equipment to support user operations such as storage; platform-as-a-service (PaaS), which allows the user to rent virtualized servers and associated services for running existing applications; and software-as-a-service (SaaS), a software distribution model that relies on applications hosted by the service provider and on their availability and global accessibility to the user via the internet.

Third, Microsoft Azure has a very flexible cloud platform that allows users to develop applications and manage their data using the Microsoft data center network. Any technological tool can be used to integrate public cloud computing with the available IT environment. To subscribe to Azure, it is necessary to use a Microsoft Live account, such as Outlook, and a credit card. The service’s main features include low price, high speed, scalability, openness, adaptability, and guaranteed security. Amazon provides online services to other websites or client applications; most of these services are not available to end users but instead allow developers to use and take advantage of the Amazon platform’s functionalities while developing their applications.

In conclusion, after reviewing the performance of virtual machines on the Microsoft Azure and Amazon cloud platforms, it was found that, where performance is concerned, the test results give a slight advantage to the MS Azure platform for CPU- and disk-intensive operations. However, the memory tests put the AWS test system one step ahead. This is understandable, considering that the MS Azure instance’s virtual hardware is stronger, as confirmed by the obtained test results. And when it comes to managing virtual resources, the results favored Amazon, since it offers more fine-grained system tuning features and gives more options oriented to working with Linux virtual machines.

References

B. S. Đorđević, S. P. Jovanović and V. V. Timčenko, “Cloud Computing in Amazon and Microsoft Azure platforms: Performance and service comparison,” 2014 22nd Telecommunications Forum Telfor (TELFOR), Belgrade, Serbia, 2014, pp. 931-934, doi: 10.1109/TELFOR.2014.7034558.