Summary of Cam-Winget et al.’s “Security Flaws in 802.11 Data Link Protocols”

TO: Professor Jason Ellis

FROM: Gladielle Z. Cifuentes

DATE: September 9, 2020

SUBJECT: 500-word summary

This is a 500-word summary of the article “Security Flaws in 802.11 Data Link Protocols” by Nancy Cam-Winget (Cisco Systems), Russ Housley (Vigil Security), David A. Wagner (University of California at Berkeley), and Jesse Walker (Intel Corp.). The article discusses the vulnerabilities of a WLAN whose weak security protocols allow an attacker with a radio receiver to eavesdrop on its traffic.

Wired Equivalent Privacy (WEP) is the mechanism that the IEEE 802.11 protocol uses as its standard for data confidentiality. WEP had an array of flaws that left Wireless Local Area Networks (WLANs) with security vulnerabilities. The article describes the flaws of WEP and how researchers went about improving its security or replacing it.

WEP has many vulnerabilities and is not a trustworthy security protocol. Because WEP is optional, many deployments never enable encryption at all, which is itself a serious threat. Another defect of WEP is the single shared key it uses for all devices. According to the article, the most serious security breach is that attackers can use cryptanalysis to recover the encryption keys WEP uses: “Once the WEP key is discovered, all security is lost” (Cam-Winget, Housley, Wagner, & Walker, 2003, p. 36). Given these flaws, the authors conclude that the protocol was poorly designed, and that experienced security protocol designers and cryptographers are needed to create such difficult security protocol designs.

A short-term solution to WEP’s flaws is the Temporal Key Integrity Protocol (TKIP). TKIP is a set of algorithms that “adapt the WEP protocol to address the known flaws while meeting these constraints” (Cam-Winget, Housley, Wagner, & Walker, 2003, p. 37). Packet sequencing and per-packet key mixing are the functions TKIP adds to patch WEP’s security flaws for short-term purposes.

The long-term solution researchers found for WEP’s security flaws is the Counter-Mode-CBC-MAC Protocol, built on the Advanced Encryption Standard (AES). This design improves on WEP’s security capabilities in several ways: it uses a single key, provides integrity protection for both the packet header and the packet payload, and reduces latency by allowing precomputation and pipelining. The CCM mode was designed to meet the criteria for this security protocol.

CCM works by merging two techniques: a counter mode for encryption and the Cipher Block Chaining Message Authentication Code (CBC-MAC). Using the same key for both “confidentiality and integrity” (Cam-Winget, Housley, Wagner, & Walker, 2003, p. 39) could be seen as a vulnerability, but CCM guarantees that the counter mode values never overlap with the CBC-MAC initialization vector.
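The single-key design described above is now standardized as AES-CCM and available in common libraries. As an illustration only (not the authors’ code), a minimal sketch using the third-party Python `cryptography` package shows one key providing both encryption for the packet payload and integrity protection for the packet header:

```python
# Minimal AES-CCM sketch: one key supplies both confidentiality and
# integrity, mirroring the CCM design summarized above. Requires the
# third-party "cryptography" package; the packet contents are made up.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)  # single key for both services
aesccm = AESCCM(key)
nonce = os.urandom(13)          # CCM nonce; must never repeat under a key
header = b"packet header"       # integrity-protected but not encrypted
payload = b"packet payload"     # encrypted and integrity-protected

ciphertext = aesccm.encrypt(nonce, payload, header)  # payload + MAC tag
recovered = aesccm.decrypt(nonce, ciphertext, header)
assert recovered == payload
```

Tampering with either the header or the ciphertext makes `decrypt` raise an authentication error, which is the integrity guarantee the article describes.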

This article reviewed WEP and the security flaws found in it. The authors described short-term and long-term alternative protocols that can replace WEP and how they can be implemented to secure a WLAN.

Reference

Cam-Winget, N., Housley, R., Wagner, D., & Walker, J. (2003). Security flaws in 802.11 data link protocols. Communications of the ACM, 46(5), 35–39. https://doi.org/10.1145/769800.769823

Summary of Lee et al.’s “The Role of Openness in Open Collaboration: A Focus on Open‐Source Software Development Projects”

TO: Professor Ellis

FROM: Teodor Barbu

DATE: September 23, 2020

SUBJECT: 500-Word Summary

This memo is a 500-word summary of the article, “The Role of Openness in Open Collaboration: A Focus on Open‐Source Software Development Projects,” by Saerom Lee, Hyunmi Baek, and Sehwan Oh, professors at universities in Daegu and Seoul, Republic of Korea.

Easy access to information has enabled groups of people to work on open projects over the internet, and innovative companies have learned to exploit this as an important tool. GitHub and SourceForge are two platforms where people can open projects and developers can work together toward a common goal. Open-source software development (OSSD) became an alternative way of drawing knowledge from outside the organization that benefits both organizations and developers. In this article, the authors conduct an experiment to determine how exploration and exploitation, two concepts from organizational learning, impact the development of a project. To tackle the problems of OSSD effectively, developers were separated into categories that either explore outside knowledge or exploit internal resources. For this research, data was gathered from 17,691 repositories on GitHub. A team of developers, called an organization, can work on one or more projects and can collaborate with other organizations to complete a project. GitHub encourages collaboration from outside an organization as a way of bringing in new ideas and solutions, considering it vital for the organization’s survival. The researchers try to establish whether exploration or exploitation is better for the overall progress of projects. To gather and analyze the data, they used a Python-powered web crawler focused on GitHub projects older than 300 days and with at least five people. The number of commits was taken as the measure of developers’ performance.

The results of the experiment show that successful repositories have more external developers but also a dedicated internal team that efficiently uses internal resources in its external interactions. In the cases that ended with a software release, the authors also noticed increased external collaboration. As the researchers conclude, “we determined that the impact of exploration increases with an increase in exploitation, that is, ambidextrous research has a positive impact on the project performance of an open collaboration over the Internet” (Lee, Baek, & Oh, 2020, p. 202). Three models reveal that a repository is more successful when the number of external collaborations is higher, and that performance drops when the number of internal members is higher. Model 4, which monitors software release cases, found that performance is affected after the release because development switches to maintenance done by the internal team, and external interaction is no longer mandatory. The experiment demonstrates the importance of free, unlimited interaction in OSSD: exchanging ideas with collaborators outside the team proved beneficial for the success of the projects and for the future consistency of the teams.

References

Lee, S., Baek, H., & Oh, S. (2020). The role of openness in open collaboration: A focus on open-source software development projects. ETRI Journal, 42(2), 196–204. https://doi.org/10.4218/etrij.2018-0536

Summary of Wallden and Kashefi’s “Cyber Security in the Quantum Era”

TO: Professor Ellis 

FROM: Lia Barbu

DATE: September 23, 2020

SUBJECT: 500-Word Summary

This memo is a 500-word summary of the article, “Cyber Security in the Quantum Era,” by Petros Wallden and Elham Kashefi, both professors in the School of Informatics at the University of Edinburgh.

Cybersecurity is essential to protect our systems, and it must be ready for a new computational model: quantum technologies. Quantum theory was one of the most significant scientific developments of the 20th century, and ongoing research means a breakthrough may be possible soon. Quantum computers will be the most valuable quantum technology because of their computational power. Quantum achievements already exist, such as Google’s “Bristlecone” processor and satellite quantum communication.

Quantum computers are no longer a myth, and cybersecurity must prepare for this new era. Wallden and Kashefi inform us, “Quantum technologies may have a negative effect to cybersecurity, when viewed as a resource for adversaries, but can also have a positive effect, when honest parties use these technologies to their advantage” (Wallden & Kashefi, 2019, p. 121). The authors consider three scenarios: in the first, everything is secure, while the other two explore what new challenges quantum technologies can create. In the first scenario, the honest party has classic technologies and the adversary has a large quantum computer. In the second, the honest party has limited access to quantum technologies while the adversary can use any quantum technology. The third scenario looks to the future, where quantum computation devices exist and the parties involved would protect their data and remain secure. The focus is on quantum technology’s effects on cryptographic attacks and on attacks against the new quantum hardware.

Even though quantum attacks seem far away, there are three essential reasons why we must address them now: security can be broken retroactively, secure cryptographic solutions take time to create, and we must be ready to implement the new technology. Cybersecurity research in post-quantum cryptography is divided into three classes according to the adversary’s use of quantum technology: classic technology with access to an oracle/quantum computer, modification of the security definition, and changes required to the new protocol. There are cryptosystems considered secure against a quantum computer attack, and the article considers three issues with them: confidence, usability, and efficiency. The authors then explain what can happen when the adversary can undermine security notions and what steps should be taken to prevent and stop this. Quantum rewinding is a technique that adds a mechanism forcing malicious adversaries to behave like weak ones.

As quantum technologies develop, quantum protocols should become a reality. Practicality means focusing research on quantum technologies that are presently achievable. Quantum gadgets open a door for new attacks, such as side-channel attacks; the defense is device independence, which comes at a high resource cost. Standards and protocols should be created for quantum technology, which will become a significant part of the computing and communication environment.

Reference

Wallden, P., & Kashefi, E. (2019). Cyber security in the quantum era. Communications of the ACM, 62(4), 120–129. https://doi.org/10.1145/3241037

Summary of Parvanova’s “Explore Modern Responsive Web Design Techniques”

TO: Professor Ellis

FROM: Enmanuel Arias

DATE: September 16, 2020

SUBJECT: 500-Word Summary

This memo is a 500-word summary of the article, “Explore Modern Responsive Web Design Techniques” by Elena Parvanova, a member of the National Organizing Committee for the IEEE International Conference on Information Technologies.

29 years ago, Tim Berners-Lee created the first website, which consisted of left-aligned text with blue hyperlinks on a white background. The first websites were created and managed by the IT departments of large companies; nowadays, anyone with basic computer skills can create a website. With the web design industry continually growing, it is important for companies to have well-designed websites, as design can play a role in their success.

Web design began in 1993 with the introduction of images accompanied by text. In 1994, the World Wide Web Consortium was formed and established Hypertext Markup Language (HTML) as the standard for web design. HTML has its limitations, but the use of JavaScript resolves them. The following year, Flash and Cascading Style Sheets (CSS) were introduced. Flash became a popular tool for creating more elaborate websites, but it was not search-friendly, and eventually the combination of JavaScript and jQuery replaced it. CSS provides a structure for styling multiple webpages, and it allows websites to be created with a tableless design using percentages, known as fluid design.

With the increase of mobile devices with internet access, website layouts needed to adapt to a variety of screen sizes while keeping the design consistent across all devices. In 2007, column grid systems began to see widespread use by web designers. The most used was the 960 grid system, with a 12-column division, which lays content out on a 960px-wide browser window. Eventually, the fixed-width grid was replaced with percentages to align with fluid design.
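The conversion from the fixed 960px grid to fluid percentages is simple arithmetic: each of the 12 columns occupies 80px (960 / 12), so a span of n columns becomes n × 80 / 960 of the container. A small worked example (my own illustration, not from the article):

```python
# Convert spans of the classic 960px, 12-column grid into the fluid
# percentage widths that replaced fixed pixel values.
GRID_WIDTH_PX = 960
COLUMNS = 12
COLUMN_PX = GRID_WIDTH_PX / COLUMNS  # 80px per column slot

def span_percent(columns: int) -> float:
    """Percentage of the container taken by a span of `columns` columns."""
    return columns * COLUMN_PX / GRID_WIDTH_PX * 100

assert span_percent(12) == 100.0  # full-width row
assert span_percent(6) == 50.0    # half-width block
assert span_percent(3) == 25.0    # quarter-width block
```

Expressed as percentages, the same proportions hold in any browser window width, which is exactly the fluid-design property the fixed grid lacked.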

Web designers used to maintain separate layouts for computers and mobile devices. Parvanova credits Ethan Marcotte with the birth of Responsive Web Design (RWD); in 2010, he “proposed that the same content could be used, but in different layouts and designed depending on screen size” (Parvanova, 2018, p. 3). RWD uses the viewport meta tag, a grid system, and media queries to determine which layout to use when displaying content. RWD also led to the creation of responsive frameworks like Bootstrap, which standardized commonly used elements and introduced layout models like CSS Flexbox and CSS Grid Layout.

Modern web design focuses on the organization of elements, the positioning of blocks, and the order of content. Flexboxes are optimized for interface design and the positioning of elements: the parent element contains the child elements and “flexes” accordingly, either filling unused space or shrinking to prevent overflow. Flexbox became popular because it finally allowed web designers to align elements properly. Unlike the grid layout, flexboxes are not intended to lay out an entire webpage, and since the grid layout is not as widely supported as flexbox, a combination of the two is frequently used in RWD.

Reference

Parvanova, E. (2018). Explore modern responsive web design techniques. Proceedings of the International Conference on Information Technologies, 43–48. Retrieved from http://infotech-bg.com/

Summary of Kiss’s “The Danger of Using Artificial Intelligence in Development of Autonomous Vehicles”

TO: Prof. Ellis

FROM: Kevin Andiappen

DATE: September 20, 2020

SUBJECT: 500-Word Summary

This is a 500-word summary of the article “The Danger of Using Artificial Intelligence in Development of Autonomous Vehicles,” by Gabor Kiss, which discusses the risks of having artificial intelligence in automobiles.

Although self-driving cars have only recently become popular, the idea has been around for years: a car that would one day be fully autonomous, eliminating the need for a driver. Technology could succeed where humans fail. According to Kiss, “The expectation of spreading self-driven cars lies in the hope of significantly decreasing the 1,3 million death toll accidents world-wide, which are caused by human factor 90 % of the time” (Kiss, 2019, p. 717). In other words, the goal of self-driving cars is to decrease the number of accidents caused by human error, because artificial intelligence can process data more quickly than humans, shortening reaction time in a dangerous situation.

By the end of November 2018, Tesla cars had traveled a total of one billion miles in autonomous mode, and statistics show one accident occurring every 3 million miles. The Department of Transportation reports an accident every 492,000 miles in America, making self-driving cars roughly seven times safer. The Society of Automotive Engineers created a scale, from 0 to 5, for rating the intelligence and capabilities of a vehicle.

NVIDIA is a company that incorporates deep learning into AI. With this technology, cars can build a lifelike, detailed, interactive model of the world and perform fast calculations within seconds. There is no 100% safe solution for self-driving cars, but using AI will come close, since it can respond to traffic situations much faster than a human. However, human drivers may abuse this by intentionally cutting in front of autonomous cars to force them to brake, or by pulling in front of them at highway entrances.

If someone altered a “road closed” sign into a “speed limit 50 mph” sign, the AI might not be able to tell which sign is legitimate, which could cause an accident; the same trick can fool a human driver as well as an AI. Digital light technology works like a projector: it can shine on the road to project symbols and/or lanes. This could be used to deceive a self-driving car into following a fake lane, causing it to crash or drive to another location.

In conclusion, artificial intelligence is a challenge for developers because it requires them to prepare for every possible scenario. The safety precautions used in self-driving cars to prevent accidents could be reprogrammed to cause accidents. The scenarios mentioned are only some of the possible dangers of self-driving cars, and developers need to be aware of them so that they can properly train the AI.

References

Kiss, G. (2019). The danger of using artificial intelligence in development of autonomous vehicles. Interdisciplinary Description of Complex Systems, 17(4), 716–722. https://dx.doi.org/10.7906/indecs.17.4.3

Summary of Golovkov et al.’s “Protecting Against Thermal Effect: Part 1: Types of Electric Arc”

TO: Professor Ellis

FROM: Michael Lin

DATE: 09/20/2020

SUBJECT: 500-word summary

This article discusses how to protect electricians from the thermal hazard of electric arcs. The first part describes the different types of electric arc, their behavior, and their methods of thermal energy dissipation. The second part discusses how statistics are used for future improvement: companies use incident information to improve their PPE, but data on electric arc incidents is hard to find in government statistical reviews. The last part covers different ways of protecting against electric arcs.

During the past 15 years, the availability of different fabrics and other materials used in PPE has helped protect electrical workers from electric arcs. Most important, though, is studying and analyzing experience: understanding electric arc incident data helps us improve. PPE equipment is then tested to make sure it will actually protect workers from an electric arc. The range of heat an electric arc can generate is very wide, so PPE alone will not provide absolute protection, and many factors affect the amount of thermal energy, such as the distance, the type of arc, and the equipment the worker is wearing at the time.

Several organizations are involved in developing and maintaining standards related to electric arc safety and PPE. What is an electric arc? Some define it as a discharge of electricity across a voltage. Not all electric arcs in industrial electrical equipment are the same: there are five different types, classified by several differentiating factors.

The first type is the open-air electric arc, a medium- or high-voltage arc that burns in open air without anything covering it; it can be caused by bushing flashover at high- and medium-voltage transformers (power and instrument) or breakers. The second type is the arc in a box, a low-voltage electric arc in an enclosure; it can happen in panels, motor control centers (MCC), or electrical meters. The third type is the moving arc, a medium- or high-voltage arc in open air between two parallel conductors. The fourth type is the ejected arc, a medium- or high-voltage arc formed at the tips of parallel conductors or electrodes; this type is not common, but it is the most dangerous because it can cause large-scale burns on human skin. The last type is the tracking arc, which is very different from the others: it can occur on a person’s skin under their clothes when they have direct or indirect contact with an energized part. Knowing the different types of electric arc is very important for electricians and for creating a safe environment for those who work around them.

References 

Golovkov, M., Schau, H., & Burdge, G. (2017). Protecting against thermal effect: Part 1: Types of electric arc. Professional Safety, 62(7), 49–54.

Summary of Yuana et al.’s “Remote Interpreter API Model for Supporting Computer Programming Adaptive Learning”

TO: Professor Ellis

FROM: Jinquan Tan

DATE: 9/20/2020

SUBJECT: 500-Word Summary Draft

This memo is a 500-word summary of the article, “Remote interpreter API model for supporting computer programming adaptive learning,” by R. A. Yuana, I. A. Leonardo, and C. W. Budiyanto.

Software engineering skills are in great demand at this time, so preparing skilled human resources is essential. One effort that can be made is developing effective learning methods, and adaptive learning is one of them. Adaptive learning technologies provide an environment that can intelligently adapt to the needs of individual learners through the presentation of appropriate information, comprehensible instructional materials, scaffolding, feedback, and recommendations based on participant characteristics and on specific situations or conditions. Adaptive learning has several characteristics: analytics, local, dispositional, macro, and micro. Yuana writes, “computer programming learning is indispensable for many exercises and needs extra supervision from the teacher” (Yuana, 2019, p. 154). A teacher can guide students who have difficulty designing program algorithms by monitoring them, and there are many adaptive learning models for programming learning.

E-learning that builds students’ psychomotor ability requires a capability that lets students write program code directly into the electronic learning system, where a particular module evaluates it. The authors apply the adaptive learning concept to improve students’ psychomotor ability during online learning/teaching using a commercial off-the-shelf LMS; the psychomotor interaction between students and the LMS is demonstrated by using adaptive learning in computer programming courses. On how the proposed model works: “The transactions processes that occur in the web API model started from LMS server. In the LMS, the user writes program code using a code editor. Subsequently, a POST method sends the program code, together with the input-output value, and also the function name of the program code to an API caller” (Yuana, 2019, p. 154). On the JSON structure of the web API response: “code file for later use by the interpreter along with standard input using pipe technique. Once the interpreter executes the program code the output is read by the API module” (Yuana, 2019, p. 155).

The research method for the remote interpreter web API model has two steps: first, create the web API model; second, test its performance. The developed web API model can be implemented with a system topology in which a few clients connect to one LMS server, and the LMS server connects to the web API server. Clients need to run the program code they created in order for it to work. On performance analysis: “Once the web API model is implemented it is ready for a performance test. The test scenario was to send the program code containing a large number of looping using Python and PHP from client to web API server” (Yuana, 2019, p. 158).
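The transaction the quotes describe (send the program code, feed standard input through a pipe, read the interpreter’s output, and return JSON) can be sketched with Python’s standard library. This is only an illustration of the idea; the function name `run_submission` and the JSON fields are my assumptions, not the paper’s API:

```python
# Sketch of the remote-interpreter step: save submitted code to a file,
# run it with the Python interpreter, pipe in standard input, and report
# the result as JSON. run_submission and the "output"/"status" fields
# are illustrative names, not taken from the paper.
import json
import subprocess
import sys
import tempfile

def run_submission(code: str, stdin_text: str, expected: str) -> str:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(
        [sys.executable, path],
        input=stdin_text,                 # the "pipe technique" for stdin
        capture_output=True, text=True, timeout=10,
    )
    output = proc.stdout.strip()
    return json.dumps({"output": output,
                       "status": "pass" if output == expected else "fail"})

# A student submission that doubles the number it reads from stdin.
result = run_submission("print(int(input()) * 2)", "21", "42")
```

In the paper’s topology this step would sit behind the web API server that the LMS POSTs to; here it is collapsed into one process for clarity.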

In conclusion, a web API model that runs interpreter-based program code has been developed, and it can support computer programming adaptive learning. Both the input and response structures have been adapted to suit the assessment of students’ psychomotor ability in learning to write program code. The web API module demonstrated its performance during the test.

Reference

Yuana, R. A., Leonardo, I. A., & Budiyanto, C. W. (2019). Remote interpreter API model for supporting computer programming adaptive learning. Telkomnika, 17(1), 153–160. https://doi.org/10.12928/TELKOMNIKA.v17i1.11585

Summary of Sun et al.’s “Security and Privacy in the Medical Internet of Things: A Review”

TO: Professor Ellis

FROM: Adewale R. Adeyemi

DATE: 09/14/2020

SUBJECT: 500-word summary draft

This memo is a 500-word summary of the article “Security and Privacy in the Medical Internet of Things: A Review,” by W. Sun, Z. Cai, Y. Li, F. Liu, S. Fang, and G. Wang.

The Medical Internet of Things (MIoT) is a group of internet-connected devices that monitor patients’ vital signs through wearable and implantable devices, and it has become an efficient new technology for the healthcare system. It is made up of the perception layer, which collects vital data through wearables; the network layer, which transmits the data collected by the perception layer; and the application layer, which provides the interface needed by users and integrates the information from the other two layers.

As MIoT is used extensively by more patients, the security and privacy of patients’ data cannot be taken for granted; they are paramount to its success. Because of the amount of real-time data MIoT transmits, enough resources must be provided to protect patients’ security and privacy. The authors give four security and privacy requirements: data integrity, usability, auditing, and patient information, all of which deal with how sensitive patient data is accessed and stored. Most MIoT devices have very little memory, and the collected data needs to be stored somewhere; cloud storage is currently used, and it has some existing solutions to the security and privacy requirements. Encryption through cryptography is implemented at three levels of communication: link, node, and end-to-end encryption. Node encryption is the most secure of the three because it does not allow data to travel in plain text at network nodes. Securing patient data is important, but less complex algorithms need to be used to reduce resource usage and keep transmission rates fast. A key transfer management scheme has been proposed to tackle this problem; the authors claim, “To secure e-health communications, key management protocols play a vital role in the security process” (Sun et al., 2018, p. 3). A lightweight key management scheme that is strong and uses few resources is being used, while lightweight algorithms and encryption tailored to the healthcare system’s problems are being improved. Access control is another solution: it authenticates users trying to access sensitive data based on set policies, which is important because patient data is shared electronically. Third-party auditing is another solution: since patients’ data is stored in the cloud, the service provider needs to be audited to verify that its practices are ethical. Data anonymization, which separates sensitive patient data from identifiers, is yet another solution; k-anonymity is currently used, and its flaws are being improved on. As technology advances, new security and privacy challenges in MIoT will arise, among them insecure networks (Wi-Fi) vulnerable to man-in-the-middle attacks, lightweight protocols for devices, and data sharing. MIoT is still improving, and more successful propositions will be made.
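The access-control idea mentioned in this paragraph (checking a user’s request against set policies before releasing sensitive data) can be illustrated with a toy role-based check; the roles and actions below are invented for illustration and are not from the article:

```python
# Toy role-based access control for MIoT patient data: each role's
# policy lists the actions it may perform. Roles and actions here are
# made-up examples, not the article's scheme.
POLICY = {
    "doctor":  {"read_vitals", "write_notes"},
    "nurse":   {"read_vitals"},
    "auditor": {"read_access_logs"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant the action only if the role's policy explicitly lists it."""
    return action in POLICY.get(role, set())

assert is_allowed("doctor", "write_notes")
assert not is_allowed("nurse", "write_notes")      # outside nurse policy
assert not is_allowed("visitor", "read_vitals")    # unknown role denied
```

Real MIoT deployments layer authentication and auditing on top of such policy checks, but the default-deny lookup is the core of the idea.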

References

Sun, W., Cai, Z., Li, Y., Liu, F., Fang, S., & Wang, G. (2018). Security and privacy in the medical Internet of Things: A review. Security & Communication Networks, 1–9. https://doi.org/10.1155/2018/5978636

Summary of Etzioni et al.’s “Should Artificial Intelligence Be Regulated?”

TO: Professor Ellis

FROM: Nakeita Clarke

DATE: Sept 20, 2020

SUBJECT: 500-Word Summary

This memo is a 500-word summary of the article, “Should Artificial Intelligence Be Regulated?” by Amitai Etzioni, and Oren Etzioni.

Anxiety regarding Artificial Intelligence (AI) and its potentially dangerous abilities has raised the question of whether or not AI should be regulated. A key first step toward such regulation would be standardizing a universally objective definition of AI. Some predict that it is inevitable for AI to reach the point of technological singularity and believe it will happen by 2030. This perspective stems from AI being the first emerging technology capable of producing intelligent technology itself, which is interpreted as a foundational threat to human existence. Respected scholars and tech leaders agree AI poses such a threat and urge the governance of AI. The Association for the Advancement of Artificial Intelligence (AAAI) suggests that there is no foreseeable reason to pause AI-related research while the decision to monitor AI is being determined. Others see no reason for regulation, stating, “machines equipped with AI, however smart they may become, have no goals or motivations of their own” (Etzioni, A., & Etzioni, O., 2017, p. 33). Even so, it may already be too late to create international regulations for AI, given its widespread global usage across public and private sectors.

Both sides agree on the social and economic impact AI will cause; however, regulation could inflate the cost of that impact. So far, AI has exhibited superior medical advantages, sped up search and rescue missions (increasing the chances of recovering victims), and been used in the psychological industry for effective patient care. AI already powers our everyday technology, from personal assistants (Google Assistant, Alexa, Siri, and Cortana) to security surveillance systems. Instead of regulating AI as a whole and limiting the progression of its beneficial impact, focusing regulation on AI-enabled weaponry may be a more actionable approach. Public interest in doing so exists and is evident from petitions urging the United Nations to ban weaponized AI. Existing treaties on nuclear weapons could be an indicator that countries across the globe may adopt one for AI. In addition to such a treaty, a tiered decision-making guidance system could aid the management of AI systems. On the flip side, what about the management of AI-powered defense, de-escalation, and rescue machines in combat zones?

AI’s disruption of the job market has begun and will create unevenness, causing additional unemployment and income disparities. Despite job losses, economists believe AI will lead to the creation of new types of jobs. A committee that monitors AI’s impact and advises on ways to combat AI-driven job loss could mitigate the social and economic threats AI presents. One can hope that an almost utopian alternative to AI’s negative impact is possible if society changes its response to AI, starting with open public dialogue as the driving force for productive policies.

References

Etzioni, A., & Etzioni, O. (2017). Should artificial intelligence be regulated? Issues in Science & Technology, 33(4), 32–36.

Summary of Watkins and Mensah’s “Peer Support and STEM Success for One African American Female Engineer”

TO: Professor J Ellis 

FROM: Brianna Persaud 

DATE: 9/19/2020

SUBJECT: 500-Word Summary

This is a 500-word summary of “Peer Support and STEM Success for One African American Female Engineer” by Shari Earnest Watkins and Felicia Moore Mensah, published in The Journal of Negro Education.

African Americans face hardships that other races typically do not when pursuing a STEM-related career. “A handful of researchers have investigated the experiences of African American PhD Scientists and have found race to be an influential factor for persistence in their STEM careers (Brown et al., 2013; Pearson.” In the assigned article, several studies were conducted to identify the obstacles that African Americans, particularly African American women, face. The article centers on Dr. Jenkins, who fought for the betterment of her race and for equal opportunity. Dr. Jenkins studied as an undergraduate at an HBCU and pursued a master’s degree at graduate school immediately afterwards.

According to Dr. Jenkins, studying as an undergraduate was one of the best experiences of her life, a conclusion shaped by the peer relationships she established within her HBCU. Dr. Jenkins believes that peer relationships ultimately have the most influence on African American women studying in STEM programs, and that establishing relationships with same-race peers in particular helps build confidence, passion, and companionship. She credits much of her success to the peers she grew close to during her time as an undergraduate, alongside whom she also dedicated a great deal of time to studying. While Dr. Jenkins emphasizes the importance of establishing peer relationships in college as an African American, she also discusses how race and racism affect those of her descent who are not in the same kind of environment she enjoyed during her undergraduate studies. Studies, along with Dr. Jenkins’s graduate school experience, indicate that racism plays a significant role in determining whether African American students succeed in pursuing a degree under the STEM umbrella; students who are not placed in such an environment are automatically at a disadvantage due to a lack of fair treatment and equal opportunities.

When Dr. Jenkins talks about her experience as a graduate student, she describes the unexpected struggle she faced once she was no longer in the same environment as during her undergraduate years. No longer surrounded by the same peers, she found it incredibly hard to stay motivated and to achieve the same academic standing she once had. In her new environment, Dr. Jenkins felt isolated because her new peers were unwilling to assist her and excluded her from many experiences; she even states that during her graduate studies, her peers were very “cliquish” and tended to stay within their own race. If it were not for her same-race peers outside her university, “superstar Jaheed” in particular, she believes she would not have been able to achieve her master’s degree. All in all, racism remains very prevalent in education for African American students, and camaraderie can alleviate that hardship while guiding students toward their STEM degrees.

Reference

Watkins, S. E., & Mensah, F. M. (2019). Peer support and STEM success for one African American female engineer. Journal of Negro Education, 88(2), 181–193. https://doi.org/10.7709/jnegroeducation.88.2.0181