Summary of Kandan et al.’s “Network attacks and prevention techniques – A study”

TO: Prof. Ellis

FROM: Jerry Chen

DATE: 3/3/2021

SUBJECT: 500-Word Summary of Article About Network Attacks and Preventions

The following is a 500-word summary of a peer-reviewed article about the types of attacks and their prevention in today's networks. The authors discuss the types of network attacks that currently exist, noting that most individuals and small businesses are still unaware of the importance of configuring their network gear, which leaves their doors open to attacks. According to the authors, "Any data passes over large number of workstations and routers which sometimes very weak due to organizational structures and their policies which may lead to damages and attacks" (Kandan et al., 2019, p. 2). There are two types of network security: hardware security and software security. Hardware security is a defensive system often used in corporations, while software security is application-based and typically used by individuals or small firms. As the authors note, "if the system is not implemented the proper security methods and control over their network, then there is a way for attacks from internal or external using these techniques" (Kandan et al., 2019, p. 2). Some major types of attacks that attackers use most frequently today are browser attacks, man-in-the-middle (MITM) attacks, and botnets.

A browser attack is the most frequent web-browser-based type of attack, in which the attacker hacks into a system by adding malware to the browser. A man-in-the-middle (MITM) attack is one in which the attacker intercepts confidential data during transmission between two victims and accesses the data without the victims' awareness. A botnet, a term formed from "robot" and "network," is a different type of attack and one of the main ways attackers gather unauthorized confidential data from users.

Problems come with solutions, and attack techniques are no exception. According to the authors (Kandan et al., 2019, p. 4), there are preventions designed to protect users from these attacks, such as MITM prevention, HTTPS, and botnet prevention. To prevent a MITM attack, the two endpoints should communicate over a more secure network and encrypt the transmission using an encryption protocol (Radhakishan & Selvakumar, 2011). HTTPS protects users from browser attacks by providing a more secure connection over the browser: certificates are issued only to the participating entities and are verified by each party before transmission. Moreover, to prevent a botnet attack, the user should make sure the intrusion detection system is up to date and should specifically configure, or shut down, ports that are not currently in use.
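The certificate verification that makes HTTPS effective against MITM attacks can be illustrated with a short sketch using Python's standard `ssl` module (a general illustration, not code from the article; the hostname is a placeholder):

```python
import socket
import ssl

# A default client context enables the checks that defend against MITM:
# the server must present a certificate signed by a trusted CA, and the
# certificate's name must match the host being contacted.
context = ssl.create_default_context()
print(context.check_hostname)                    # True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True

def open_verified(hostname: str) -> ssl.SSLSocket:
    # Connecting performs the TLS handshake; if certificate verification
    # fails, ssl.SSLCertVerificationError is raised before any
    # application data is exchanged.
    sock = socket.create_connection((hostname, 443))
    return context.wrap_socket(sock, server_hostname=hostname)
```

A connection opened this way refuses to send data to an impostor sitting between the two endpoints, which is exactly the protection the authors describe.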

As network security constantly changing every day, attackers are always using their tech knowledge to explore new types of attacks to fulfill their purposes. People or small businesses should always configure their network gears and install security software to monitoring the attacks to decrease the chance of being attack.

Reference

Kandan, A. M., Kathrine, G. J. W., & Melvin, A. R. (2019). Network attacks and prevention techniques – A study. 2019 IEEE International Conference on Electronics, Communication and Computing Technologies (ICECCT). https://doi.org/10.1109/ICECCT.2019.8869077

Radhakishan, V., & Selvakumar, S. (2011). Prevention of man-in-the-middle attacks using ID-based signatures. Second International Conference on Networking and Distributed Computing. https://doi.org/10.1109/icndc.2011.40

Summary of Feng Shi et al.’s “Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19”

TO: Prof. Ellis

FROM: Neil Domingo

DATE: 3/3/2021

SUBJECT: 500-Word Summary of Article About Utilizing Artificial Intelligence In Fighting COVID-19

The following is a 500-word summary of a peer-reviewed article about the use of artificial intelligence in medical imaging during the COVID-19 pandemic. The article's goal is to discuss the use of medical imaging with artificial intelligence in fighting COVID-19 and to survey machine learning methods in the imaging workflow. Medical imaging such as CT scans and X-rays has been found to play a critical role in restraining the transmission of COVID-19. The CT scan is one of the imaging-based diagnoses used for COVID-19 and includes three stages: pre-scan acquisition, image acquisition, and disease diagnosis. Artificial intelligence contributes to the fight against COVID-19 by allowing for safer, more accurate, and more efficient imaging solutions. Imaging facilities and workflows should be considered important for reducing risks and saving lives. According to the authors, “AI-empowered image acquisition can significantly help automate the scanning procedure and also reshape the workflow with minimal contact to patients, providing the best protection to the imaging technicians” (F. Shi et al., 2020, p. 4). Contactless image acquisition is necessary to reduce the risk of technicians and patients infecting each other, since there is otherwise contact between them. Artificial intelligence can support contactless scanning by identifying the pose and shape of a patient using data from visual sensors. The scan range, the start and end points of a CT scan, can be estimated with visual sensors and artificial intelligence, improving scanning efficiency. A mobile CT platform with artificial intelligence implemented is an example of an automated scanning workflow that prevents unnecessary interaction between technicians and patients; its patient-positioning algorithm captures the patient's pose.

Segmentation is crucial in image processing and analysis for assessing COVID-19, as it delineates the regions of interest (ROIs): the organs affected by COVID-19 and the infected areas. CT produces high-quality 3D images, and ROIs can be segmented within them. Approaches such as human knowledge and machine learning methods can be integrated with a segmentation network to provide adequate training data for segmentation tasks. Image segmentation allows radiologists to accurately identify lung infections and to analyze and diagnose COVID-19.
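A toy sketch can clarify what a segmentation mask is (this is my own illustration, not a method from the article; real COVID-19 segmentation uses trained deep networks, and here a simple intensity threshold stands in for the network's decision):

```python
import numpy as np

# A tiny "scan" of pixel intensities; higher values stand in for the
# infected region a real network would learn to recognize.
scan = np.array([
    [0.1, 0.2, 0.1],
    [0.2, 0.9, 0.8],
    [0.1, 0.7, 0.2],
])

# The segmentation mask labels which pixels belong to the ROI.
roi_mask = scan > 0.5
print(roi_mask.sum())   # number of ROI pixels: 3
print(roi_mask.mean())  # fraction of the image covered by the ROI
```

Radiologists work from exactly this kind of per-pixel labeling, just produced by a model rather than a fixed threshold.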

Patients suspected of having COVID-19 need diagnosis and treatment, and because COVID-19 is similar to pneumonia, AI-assisted diagnosis using medical images can be highly beneficial. Deep learning models such as ResNet50 were proposed to detect COVID-19 in X-ray images. The ResNet50 model contains two tasks: classification between COVID and non-COVID, and anomaly detection, which optimizes the COVID-19 score used for classification. Studies have separated COVID-19 patients from non-COVID-19 patients with the help of artificial intelligence, and the reading time of radiologists was reduced by 65%.

With many studies proposing CT-based COVID-19 diagnosis showing promising results, early detection and prediction of severity are important. It is challenging for artificial intelligence to be used in a procedure involving the incubation period and infectivity. X-rays and CT scans are not often available for COVID-19 applications, which slows artificial intelligence methods from being continually researched and developed.

Reference

F. Shi et al., “Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19,” in IEEE Reviews in Biomedical Engineering, vol. 14, pp. 4-15, 2021, doi: 10.1109/RBME.2020.2987975.

Summary of Dixon Jr.’s “Artificial Intelligence: Benefits and Unknown Risks”

TO: Professor Ellis

FROM: Shoron Reza

DATE: 02/28/2021 

SUBJECT: 500-Word Summary of Article About Artificial Intelligence  

The following is a 500-word summary of a journal article about the benefits of artificial intelligence, as well as its unknown risks, in the context of the judicial system. The author highlights that AI is a technological advancement that has substantially influenced the world of law and criminal justice. He also emphasizes that there have been a few AI inaccuracies that have raised red flags in the past. The article focuses on artificial intelligence related to eDiscovery, predictive policing, crime solving, risk assessment, and the judicial use of AI in court cases. It also describes the development of AI facial recognition software to identify prime suspects and criminals. Artificial intelligence has been a great help to society and its people; however, there are many risk factors that have not yet been officially acknowledged.

AI has the ability to be efficient and effective with resources, decreasing the work of manual laborers as well as increasing the comfort of people's lives. AI has made a significant impact in the world of law enforcement by assisting with facial recognition and crime-prediction algorithms. With the help of AI, police officers have had an easier time recognizing suspected criminals and calculating the possibility of a perpetrator committing a crime again. Additionally, AI-based facial recognition software has been employed to identify suspects from images caught on security cameras, cell phones, and other video sources (Dixon, 2021). Law enforcement agencies use PredPol, an AI algorithm that helps foresee crimes in certain areas on a daily basis. Researchers are also introducing algorithms to determine the class of weapon based on gunshot audio. These algorithms will be able to recreate crime scenes by using certain data to help investigators improve their understanding of an event. COMPAS is an AI-driven assessment tool used to assess defendants in criminal cases.

In recent events, the Department of Justice discovered repeated law violations by the police regarding the use of extreme force against Black people and minority groups, as well as failures to address violence against women. Evidence indicates that the developers of the algorithm did not provide accurate historical data. Supporters of AI think it is the only way to decrease human error and bias in official courts. However, studies show that AI has actually revealed bias in the past, mistakenly flagging Black defendants as potential reoffenders more often than white defendants. A study conducted in 2017 recruited volunteers online and asked them to predict whether certain individuals were likely to repeat a crime; the crowdsourced predictions were as precise as COMPAS at predicting repeat offenders. Although AI algorithms are great at recognizing consistency in data and are able to generate predictable results, consistency and prediction are not the same as fairness. Technologies and machines struggle to operate in a world where biases and prejudices exist. Overall, AI plays an increasingly important role in our lives and in the criminal justice system. However, it can sometimes do more harm than good, and we as a society need to re-evaluate the uses of AI.

Reference 

Dixon, H. B., Jr. (2021). Artificial intelligence: Benefits and unknown risks. Judges’ Journal, 60(1), 41–43.

Summary of Yuzhao Wu et al.’s “Cloud Storage Security Assessment Through Equilibrium Analysis”

TO: Prof. Ellis

FROM: Mahir Faisal

DATE: 03/03/2021

SUBJECT: 500-Word Summary of Article About Cloud Storage Security Assessment Through Equilibrium Analysis

The following is a 500-word summary of a peer-reviewed article about the security analysis of cloud storage solutions. The authors discuss how cloud providers and third-party providers can provide strong security measures and effective data protection to make data more secure and reliable in a cloud infrastructure. Cloud storage has been widely used by companies and enterprises to put their data and information on cloud servers. Users can upload their data to the cloud and access it without any issues. However, because user data contains confidential information, network attackers target third-party cloud service providers to hack user data. Some methods and schemes have been proposed for risk assessment of the cloud, which help cloud providers act as defenders of security. However, users cannot fully trust these service providers because, while they may ensure the integrity and confidentiality of the data, they may also access the content of the data. For example, cloud service providers are responsible for the security of the data, whereas cloud infrastructure providers make resources available on the cloud; the latter do not do security assessments as cloud service providers do. There is a chance of conflicts of benefits between attackers and defenders, and this conflict may lead users to think that cloud providers lack appropriate assessment mechanisms. Some third-party service providers offer security services to cloud providers by encrypting user data, but their conflicts of benefit with cloud providers and users make them only semi-trustworthy, the same as cloud providers. According to the authors, “each person’s benefit is determined by the security level of the whole system” (Wu et al., 2019, p. 739). If the layers of security are strong, then an attacker needs to break the security mechanisms one by one, which makes the data difficult to decrypt.
Another point to note is that decision-makers can be divided into attackers and defenders, and users and cloud providers can act as either. To address this, game theory offers tools and models that help decision-makers form a strategy. The study assesses the security of public cloud storage providers and third-party mediators through equilibrium analysis. To be precise, the authors conduct evaluations and assessments on a series of game models between public cloud storage providers and users to analyze the security of different services. By using the game-theory model, users can analyze the risk of whether their private data is likely to be hacked by the cloud service providers. Moreover, cloud service providers can make effective strategies to improve their service and make it more trustworthy. For example, if a cloud service provider uses a Nash equilibrium strategy and would not steal user data, then the cloud system has effective internal security and preserves the confidentiality and privacy of user data. A semi-trustworthy third-party service provider will give additional security to user data if users trust third-party providers as much as cloud service providers. I believe that cloud providers should emphasize strong security measures and assessment mechanisms to protect the confidentiality and integrity of user data.
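The idea of a Nash equilibrium between a provider and a user can be sketched in a few lines of Python (the payoff values below are hypothetical and are mine, not the article's; a strategy pair is a pure Nash equilibrium when neither player can gain by unilaterally switching):

```python
# 2x2 game: the provider chooses "protect" or "steal" user data,
# the user chooses "encrypt" or "trust". Payoffs are illustrative.
provider_payoff = {
    ("protect", "encrypt"): 3, ("protect", "trust"): 4,
    ("steal",   "encrypt"): 1, ("steal",   "trust"): 2,
}
user_payoff = {
    ("protect", "encrypt"): 2, ("protect", "trust"): 4,
    ("steal",   "encrypt"): 3, ("steal",   "trust"): 0,
}

def is_nash(p_choice, u_choice):
    # Neither player may be able to do better by deviating alone.
    for alt in ("protect", "steal"):
        if provider_payoff[(alt, u_choice)] > provider_payoff[(p_choice, u_choice)]:
            return False
    for alt in ("encrypt", "trust"):
        if user_payoff[(p_choice, alt)] > user_payoff[(p_choice, u_choice)]:
            return False
    return True

equilibria = [(p, u) for p in ("protect", "steal")
                     for u in ("encrypt", "trust") if is_nash(p, u)]
print(equilibria)  # [('protect', 'trust')]
```

With these payoffs, the stable outcome is the provider protecting the data and the user trusting the service, which mirrors the article's point that a provider following an equilibrium strategy has no incentive to steal user data.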

Reference

Y. Wu, Y. Lyu, and Y. Shi, "Cloud storage security assessment through equilibrium analysis," Tsinghua Science and Technology, vol. 24, no. 6, pp. 738-749, Dec. 2019, doi: 10.26599/TST.2018.9010127.

Summary of Tessema Mengistu’s “A “No Data Center” Solution to Cloud Computing”

TO: Professor Ellis  

FROM: Alexander Rossler 

DATE: 02/24/2021 

SUBJECT: 500 Word Summary of “A “No Data Center” Solution to Cloud Computing”

Cloud computing services are extremely helpful to many, but they are complex and expensive for a new company to begin offering. To begin with, cloud computing is the availability of computer resources over the internet; this could be anything from storage to software streamed straight to another device. Of course, there are consumer-level options like Google Drive and even Adobe Creative Cloud, but when working at a company level, having your own private cloud service for employees and others involved is much more professional and organized. On top of this, having your own control of something this important makes it all the safer, in addition to whatever other forms of security you incorporate into it. Developing your own private cloud computing service can be expensive and time consuming, but in the long run it would be quite inexpensive compared to services that charge extremely high fees depending on the hardware you require from them. You would be able to cut out the middleman pricing and handle everything on your own at a much more affordable price.

Instead of setting up data centers filled with expensive servers, we could instead use the Credit Union Cloud Model (CUCM), which allows us to use resources from computers that either have an overabundance of resources or are not currently being used. This model of sharing resources across multiple computers is not exclusive to cloud computing; we've seen it used for other missions in the past, including allocating small amounts of GPU power to solve complex algorithms being studied by scientists. Our CUCM, though, would still need one or more machines dedicated to managing the volunteered machines (Member Nodes); these would be considered the server for the cloud. They would be the only permanent, set-in-stone machines in the model, considering any one computer can opt out of the model at any given point in time. To make this cloud model work, software needs to be installed on all nodes involved, including the Member Nodes and the Management Node(s), allowing resources to be managed and allocated properly. There are quite a few decisions to make in terms of which software to choose, with different optimizations for the different hardware being utilized. Deciding which program to use depends mainly on what hardware you decide to use as a resource manager and allocator.
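The Management Node's bookkeeping can be sketched as follows (the class and method names are my own illustration, not from the paper): Member Nodes volunteer spare capacity, may leave at any time, and the manager allocates work to whoever is currently present.

```python
class ManagementNode:
    def __init__(self):
        self.members = {}  # node_id -> spare CPU cores volunteered

    def join(self, node_id, spare_cores):
        self.members[node_id] = spare_cores

    def leave(self, node_id):
        # Any Member Node may opt out at any point in time.
        self.members.pop(node_id, None)

    def allocate(self, cores_needed):
        # Pick the first member with enough spare capacity, if any.
        for node_id, spare in self.members.items():
            if spare >= cores_needed:
                self.members[node_id] = spare - cores_needed
                return node_id
        return None  # no capacity available right now

mgr = ManagementNode()
mgr.join("laptop-a", spare_cores=2)
mgr.join("desktop-b", spare_cores=8)
mgr.leave("laptop-a")       # laptop-a opts out of the model
print(mgr.allocate(4))      # prints desktop-b
print(mgr.allocate(16))     # prints None
```

The key property this captures is that only the Management Node is permanent: the pool of workers it draws from changes as machines join and leave.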

References:

T. Mengistu, A. Alahmadi, A. Albuali, Y. Alsenani and D. Che, “A “No Data Center” Solution to Cloud Computing,” 2017 IEEE 10th International Conference on Cloud Computing (CLOUD), Honolulu, CA, 2017, pp. 714-717, doi: 10.1109/CLOUD.2017.99.

Summary of Hitoshi Oi’s “Evaluation of Ryzen 5 and Core i7 Processors with SPEC CPU 2017”

TO: Prof. Ellis

From: Angel Rojas

DATE: Feb 27 2021

Subject: 500-word Summary

This paper is a summary of a series of benchmark tests between two major processor companies putting their high-end chips to the test. Over the years, Intel and Advanced Micro Devices (AMD) have been on the leading edge of competition while also collaborating. Although there is speculation about which side surfaced each new technology first, one breakthrough came when AMD settled on 64-bit computing for the x86 instruction set, which Intel then licensed. Since then, each generation of processors has been researched and developed, but have we reached a limit on microprocessors? It has been difficult to fit more transistors on a die, as there is a physical limit; following Moore's law, we can see there will be a limit on the number of transistors we can fit in a single integrated circuit. This is due to the material these processors are made of, which is silicon.

Advanced Micro Devices (AMD) has been competing in the CPU market against Intel, and in recent years AMD has released the Zen microarchitecture, on which the Ryzen processors are built. AMD has reached the same number of physical cores per package as Intel. Each chip was tested in benchmark programs to push it to maximum output. Both chips utilize the same x86 instruction set and are manufactured with 14 nm transistors. Ryzen is equipped with more cache than its Intel counterpart. The Ryzen processor has fewer output ports on its scheduler, the unit that sets processes in order of execution and priority. While the Intel chip has more output ports built in, its scheduler is undocumented, and no information about it has been made available. The Intel chip has a feature called Turbo Boost, which increases the clock speed up to a certain threshold, unlike the Ryzen chip, which is locked.

The software used was SPEC, a benchmark testing program with multiple test scenarios that exhibit real-life simulations of utilizing CPU power. The series of tests measures how fast the CPU can process instructions as if in a real-life workload. Both test benches utilized the same Linux OS (Ubuntu) and had the same amount of DDR4 RAM, 16 GB. The test results show that both excel in performance; however, the Intel chip consumed more energy than the Ryzen. Multithreaded simulations were difficult to examine properly due to synchronization. The tests show that Ryzen performs better in multi-threaded tasks than Intel's Core CPU while consuming less energy. In conclusion, Intel's 8th-generation i7 processor outperformed in the tests but consumed more energy than the Ryzen. Both chips performed similarly, but there are differences in the way they are manufactured.

Reference: H. Oi, "Evaluation of Ryzen 5 and Core i7 Processors with SPEC CPU 2017," 2019 IEEE International Systems Conference (SysCon), Orlando, FL, USA, 2019, pp. 1-6, doi: 10.1109/SYSCON.2019.8836790.

Summary of “Cloud Computing in Amazon and Microsoft Azure platforms: Performance and service comparison”

The following is a 500-word summary of a comparison of cloud computing performance and services on the Amazon and Microsoft platforms. Cloud computing is a technology that enables flexible access to a wide range of computer system resources operating independently. The main advantage of using cloud computing is that it reduces the initial investment in an information system. The main obstacle of cloud computing is that it is a distributed technology in a global market: we live in territorially limited jurisdictions, making it hard to protect data and confidentiality.

First, extensive research has been carried out to establish a methodology and evaluation of service performance. This paper therefore compares the two cloud computing platforms by looking at the architecture and categories of cloud computing services and at related work. The cloud computing architecture model is based on these entities: the front end, representing the user-controllable infrastructure characteristics; the back end, representing the cloud provider's infrastructure; the central server, responsible for service management, traffic, and client requests; the hardware layer, responsible for managing the physical cloud resources; the virtualization layer, which includes computing and storage resources; the platform layer, consisting of the applied operating system and application framework; and the application layer, the top of the cloud architecture hierarchy, which consists of up-to-date cloud applications.

Second, all these layers provide three categories of services: infrastructure-as-a-service (IaaS), a service provision model that outsources organizational equipment to support user operations such as storage; platform-as-a-service (PaaS), which allows the user to rent virtualized servers and associated services to run existing applications; and software-as-a-service (SaaS), a software distribution model that relies on applications hosted by the service provider, with availability and global accessibility delivered to the user via the internet.

Third, Microsoft Azure has a very flexible cloud platform that allows users to develop applications and manage their data using the Microsoft data center network. Any technological tool can be used to integrate public cloud computing with the available IT environment. To subscribe to Azure, it is necessary to use a Microsoft Live account, such as Outlook, and a credit card. The service's main features include low price, high speed, scalability, openness, adaptability, and guaranteed security. Amazon provides online services to other websites or client applications; thus, most of these services are not available to end users but instead allow developers to use and take advantage of the Amazon platform's functionality while developing their applications.

In conclusion, after reviewing virtual machine performance on the Microsoft Azure and Amazon cloud platforms, it was concluded that, when it comes to performance, the test results give a slight advantage to the MS Azure platform where CPU- and disk-intensive operations are concerned. However, the memory tests give one step up to the AWS test system. This is understandable, considering that the MS Azure instance of virtual hardware is stronger, as confirmed by the obtained test results. And when it comes to managing virtual resources, the results favored Amazon, since it offers more fine-grained system tuning features and gives more options oriented to working with Linux virtual machines.

References

B. S. Đorđević, S. P. Jovanović and V. V. Timčenko, "Cloud Computing in Amazon and Microsoft Azure platforms: Performance and service comparison," 2014 22nd Telecommunications Forum Telfor (TELFOR), Belgrade, Serbia, 2014, pp. 931-934, doi: 10.1109/TELFOR.2014.7034558.

Summary of Yew-Soon Ong and Abhishek Gupta’s “Five Pillars of Artificial Intelligence Research”

TO: Prof. Ellis

From: Pranta Dutta

DATE: Feb 25 2021

Subject: 500-word Summary

The following is a 500-word summary of a peer-reviewed article, “Five Pillars of Artificial Intelligence Research” by Yew-Soon Ong and Abhishek Gupta. In the article, the authors discuss the basic elements of artificial life for sustainable AI. According to the authors, “Due to the accelerated development of AI technologies witnessed over the past decade, there is increasing consensus that the field is primed to have a significant impact on society as a whole” (Ong & Gupta, 2019, p. 411).

The goal of artificial intelligence (AI) was for machines to have intelligence equal to humans; however, AI has surpassed that goal. With the help of machine learning, AI has managed to surpass human intelligence in some arenas, such as IBM Watson winning the game of Jeopardy or the AlphaZero algorithm defeating a world champion in a game of chess. Because of this, people believe that AI will have a significant impact on society. It has the potential to improve human decision-making in healthcare, economics, and governance. However, there are some challenges it must overcome.

The first is the rationalizability of AI systems. A part of machine learning is deep learning, which, to an artificial intelligence, is like a human brain. Its main criticism has been that it is opaque: even though it accomplishes remarkable predictions, it cannot explain why certain inputs led to the projected output. It would need the ability to rationalize its interpretations and explanations; otherwise, it could compromise the safety of lives where critical decisions are crucial.

The second is the resilience of AI systems. Artificial intelligence has surpassed human intelligence in some instances; however, it still lacks common sense, which means it can be easily misled. For example, if someone adds black and white stickers to a stop sign, AI may interpret it as a speed limit sign. This error could cause a traffic jam or an accident.

The third is the reproducibility of AI systems. To maintain the integrity of AI applications, it is necessary to ensure reproducibility by designing and complying with standardized software requirements. One obstacle is the vast number of hyperparameters; without expertise in hyperparameter selection, the trained model may produce poor results. The community for open-source software development in AI is growing, but there is still a need for software standards to be specified.

The fourth is the realism of AI systems. AI has shown great strength when it comes to human interaction, but one of the challenges it faces is the development of such a system. For example, every human has a different way of expressing themselves, such as through speech, body language, and facial expressions. For AI to integrate well, it must develop its own traits and personality.

The fifth is the responsibility of AI systems. As powerful as AI is at the moment, it still needs to have some level of responsibility; without it, artificial intelligence could take over the world. AI will have to be programmed to comply with ethics and laws.

To conclude, AI has made a lot of progress in the time it has had. Given what we have seen, it will no doubt be incorporated into our society. However, it is essential to make sure that AI addresses all of the concepts mentioned above, from rationalizability to responsibility, in order to function reliably and ethically in everyday life.

References

Ong, Y.-S., & Gupta, A. (2019). AIR5: Five pillars of artificial intelligence research. IEEE Transactions on Emerging Topics in Computational Intelligence, 3(5), 411–415. https://doi.org/10.1109/tetci.2019.2928344

Outline for Expanded Definition Project, Week 4

During this week’s lecture, I discussed the following outline as a good model for you to follow while creating your own Expanded Definition essay. A good rule of thumb for your quoted material would be at least 2 cited definitions and 3 cited contextual sentences, but you might find that having more definitions and more contextual sentences strengthens your essay. Remember to discuss, explain, and compare/contrast the quotes that you find to help your reader understand how these all relate to one another before endeavoring to write your working definition at the end of your essay.

Your Name's Expanded Definition of YOUR TERM

TO: Prof. Jason Ellis
FROM: Your Name
DATE: Due Date
SUBJECT: Expanded Definition of YOUR TERM

Introduction [Heading Level 2]
What is the purpose of this document? What term are you defining? How are you discussing the way it is defined and the way it is used in context? Describe a road map for what follows (definitions and context). This content should be published as paragraphs, unlike the heading for this section, which is a level 2 heading.

Definitions [Heading Level 2]
Quote several definitions of the term that you selected. Provide quotes and parenthetical citations for each definition, and include your sources in the References section at the end of the document. Each definition that you include deserves discussion in your words about what it means and how it relates to the other definitions that you include. Consider how they are alike, how are they different, who might use one versus another, etc.

Context [Heading Level 2]
Quote several sentences from a variety of sources that use the term in context. A range of sources would provide the best source material for your discussion of how the term is used in these contexts. For example, a quote from an academic journal or two, a quote from a newspaper or magazine, a quote from a blog, and a quote from social media would give you a range of uses that might have different audiences. For each quote, you should devote at least as much space as the quote discussing what it means in that context and how it relates to the other quotes in context. Each quote should be in quotes, have a parenthetical citation, and a bibliographic entry in your references at the end of your document.

Working Definition [Heading Level 2]
Based on the definitions that you quoted and discussed, and the contextual uses of the term that you quoted and discussed, write a working definition of the term that's relevant to your career field or major, which you will need to identify (this is the specific context for your working definition).

References [Heading Level 2]
Order your APA-formatted bibliographic references alphabetically by the author's last name. In your posted version, they do not need a hanging indent, and they should not be in a bulleted list.

How to Submit Your 500-Word Summary, Week 4

Refer to this week’s lecture for more details on how to post your 500-Word Summary project to our OpenLab Course Site.

Below, I am including some screenshots to guide you through the process of creating a post for your 500-Word Summary.

To begin your own Post, login to OpenLab, navigate to our Course Site, mouseover the "+" icon, and click "Post."

Before typing anything, look under Categories on the right and add a check next to "500-Word Summary."

Click in the "Add Title" section to enter your title (e.g., Summary of Lin's "3D Layering of Integrated Circuits"). Then, click in the "Start Writing" area and copy-and-paste your 500-Word Summary memo from your word processor into this area.

After copyediting your work to ensure everything is as you want it to be, click on "Publish" and then click "Publish" on the next screen. Verify that your post is live on the site by clicking on "ENG2575 Technical Writing" at the top center to return to our Course Site, and then click on the down arrow next to Student Projects in the left menu and 500-Word Summary beneath it to see your project posted.