You can find the website for the Research Report on The Migration of On-Premise Data Centers to Cloud Infrastructure here. It was written by Shoron Reza, Mahir Faisal, Angel Rojas, Andrew Dindyal, and Alexander Rossler.
Category: Student Projects
Research Report on Issues with Robots and Humans
You can find the website for the Research Report on Issues with Robots here. It was written by Mohammad Al Amin, Kiara Candelario, Neil Domingo, and Ali Hossain.
Kiara Candelario’s Instructional Manual for Using Oracle Live SQL
For this project, I wrote an instructional manual on how to use Oracle Live SQL.
Research Report on Problems and Solutions for “Self-Driving Vehicle”
You can find the website for the Research Report on Problems and Solutions for “Self-Driving Vehicle” here. It was written by Pranta Dutta, Jerry Chen, Chowdhury Hashmee, Foysal Ahmed, Mateo Avila, and Isaac Ajeleti.
Instruction manual
Hi everyone, this is my instruction manual (How to Make a Compass Android App). The link is below, so please take some time to review my writing and leave your feedback.
How to make a Compass Android App
TO: Prof. Jason Ellis
FROM: Ali Hossain
DATE: 04/12/2021
SUBJECT: Instruction manual
How to make a Compass Android App
Ali Hossain
BTech in Computer Information Technology
New York City College of Technology
This user manual was created as a class project in ENG2575, OL88, Spring 2021.
1.0 Introduction and Purpose
2.0 List of Materials and Equipment Needed
3.0 Setting up the Required Permissions
4.0 Designing the GUI of the App
5.0 Writing the Main Code of the App
6.0 Building and Running the App
7.0 Troubleshooting
8.0 References
1.0 Introduction and Purpose
We’ll develop a simple compass app that utilizes the internal accelerometer and magnetometer sensors of the Android device. An accelerometer is a sensor that converts mechanical acceleration into electrical signals; similarly, a magnetometer translates magnetic field intensity into electronic signals.
Most Android devices have an accelerometer and a magnetometer sensor inside, so a compass app only requires software rather than additional hardware.
As we develop our compass app, we’ll learn how to set permissions to use sensors, read acceleration and magnetic field data in Java code, extract the orientation data from the sensor readings, and animate images. In the end, we’ll have a complete compass app that we can use in daily life.
2.0 List of Materials and Equipment Needed
- Windows/Mac/Linux Desktop/Laptop
- Minimum 8 GB of RAM and enough storage to hold all the data
- Internet Connection
- JRE (Java Runtime Environment) installed
- Android Studio installed with the SDK and an emulator
3.0 Setting up the Required Permissions
Let’s start by creating an Android project in Android Studio. I named the project Compass App, selected Empty Activity as the default activity type, and set the minimum API level to 15.
We’ll need a compass image whose needle shows absolute north. I found the royalty-free image shown in Figure 3.1 for this purpose (I chose this one because it looks like an antique compass). You can of course use any other image you like in your project. Please copy and paste this image into your drawable folder as a resource file so it can be used as a UI component. The name of the image is compass.png; we’ll use this name to access it in our code.
Figure 3.1. The compass Image
If we use sensors in an Android project, we have to declare the required permissions for these sensors in the AndroidManifest.xml file, which is located in the manifests folder as shown below:
Figure 3.2. The AndroidManifest file in the project explorer
Open this file by double-clicking it in Android Studio and you’ll see its default contents, as shown in Figure 3.3. Please add the lines shown in Code 3.1 to this file before the <application> tag; you’ll then obtain the finalized contents shown in Code 3.2. These lines make the accelerometer and magnetometer outputs available to our app.
Figure 3.3. Default contents of the AndroidManifest.xml file
Code 3.1
Code 3.2
4.0 Designing the GUI of the App
Now, let’s design the layout of the app. Please open the layout_main.xml file and change the text of the default Hello World TextView to Compass App, which will serve as the app title. Please set its font size to 30sp and make it bold. Then, position it as follows:
Figure 4.1. The TextView used to display the title of the app
Let’s now place an ImageView in the middle of the GUI and select the compass image that we pasted to the drawable folder:
Figure 4.2. Selecting the compass image for the ImageView component
After we place the ImageView, it will be selected. Then, please set its ID to iv_compass (short for ImageView_compass) from the right pane of Android Studio as follows:
Figure 4.3. Setting the ID of the compass ImageView
Finally, let’s place a TextView below the ImageView in which we’ll display the orientation angle in real time. I set its ID to tv_degrees (short for TextView_degrees) and made it 24sp and bold, as shown below:
Figure 4.4. Adding the TextView to display the orientation angle
5.0 Writing the Main Code of the App
We have completed the design of the user interface and are now ready to continue with the coding. Please open the MainActivity.java file in Android Studio. This file will have the default contents shown below:
Code 5.1
The horizontal direction of a compass bearing is called the azimuth. We’ll calculate this angle from the magnetometer and accelerometer outputs. Let’s define a float-type variable to hold this data:
Code 5.2
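Since the original code listing is not reproduced here, a minimal sketch of what such a declaration might look like is shown below; the name azimuth_angle matches the variable referred to later in this manual.

```java
// Field declared inside the MainActivity class, before onCreate().
// Holds the azimuth angle (the horizontal compass bearing) computed from the sensor data.
float azimuth_angle = 0f;
```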
We also need to define objects related to the sensors as follows:
Code 5.3
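A hedged sketch of these declarations; the field names sensorManager, accelerometer, and magnetometer are my own choices rather than a copy of the original listing.

```java
// Assumed imports: android.hardware.Sensor, android.hardware.SensorManager
SensorManager sensorManager;  // gives access to the device's sensors
Sensor accelerometer;         // reads acceleration data
Sensor magnetometer;          // reads magnetic field data
```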
In this code, the first object is a SensorManager object that is used to access the sensors. The other two declarations define Sensor objects for reading the outputs of the accelerometer and the magnetometer.
Finally, let’s declare ImageView and TextView objects which will be used to access the corresponding components in the GUI:
Code 5.4
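A sketch of these declarations; the field names simply mirror the IDs set in Section 4.

```java
// Assumed imports: android.widget.ImageView, android.widget.TextView
ImageView iv_compass;  // the compass image (ID iv_compass in the layout)
TextView tv_degrees;   // displays the azimuth angle (ID tv_degrees in the layout)
```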
We can place these declarations inside the MainActivity class just before the onCreate() method. Then, we can assign the default accelerometer and magnetometer sensors to their objects inside the onCreate() method as follows:
Code 5.5
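A sketch of how the onCreate() additions might look, assuming the default Empty Activity template; the layout resource name may differ in your project.

```java
// Assumed import: android.content.Context
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);  // use the layout file name from Section 4 if yours differs

    // Get the sensor service and the default accelerometer and magnetometer sensors.
    sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
    accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    magnetometer = sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
}
```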
After these declarations and assignments, the MainActivity.java file currently looks like Code 5.6.
Code 5.6
In order to continue with reading the sensors, we have to implement the SensorEventListener interface. We do this by using the implements keyword in the main class definition as follows:
Code 5.7
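For illustration, assuming the default class generated by the Empty Activity template, the modified declaration might look like this:

```java
// Only the class declaration line changes; the rest of the class body stays as before.
public class MainActivity extends AppCompatActivity implements SensorEventListener {
    // ... existing fields and onCreate() ...
}
```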
Note that this is a single line of code.
When we implement the SensorEventListener interface, Android Studio warns us with a red bulb indicating that we need to implement the required methods in our code:
Figure 5.1. Warning for implementing the required methods
Please click Implement methods, and Android Studio will automatically add the onSensorChanged() and onAccuracyChanged() methods when you click the OK button in the dialog box:
Figure 5.2. Dialog showing the methods which will be implemented
Android Studio automatically places the following code to MainActivity.java:
Code 5.8
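The generated stubs should look roughly like the following (placed inside the MainActivity class); their bodies stay empty until we fill in onSensorChanged() below.

```java
@Override
public void onSensorChanged(SensorEvent event) {
    // The main compass logic will go here.
}

@Override
public void onAccuracyChanged(Sensor sensor, int accuracy) {
    // Required by SensorEventListener; not used in this app.
}
```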
We’ll write our main code inside the onSensorChanged() method. However, before moving on to the main code, let’s write the onResume() and onPause() methods for the main activity. Sensors are power-hungry components, so it is important to pause and resume the sensor listeners when the activity pauses and resumes. For this, we simply add the following code just below the end of the onCreate() method:
Code 5.9
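A hedged sketch of these lifecycle methods, reusing the field names from the earlier sketches; the SENSOR_DELAY_GAME sampling rate is my own choice.

```java
@Override
protected void onResume() {
    super.onResume();
    // Power the sensors back on by registering the listeners when the activity resumes.
    sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME);
    sensorManager.registerListener(this, magnetometer, SensorManager.SENSOR_DELAY_GAME);
}

@Override
protected void onPause() {
    super.onPause();
    // Disconnect the sensors to save power when the activity pauses.
    sensorManager.unregisterListener(this);
}
```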
In the onResume() method, the sensor listeners are registered, meaning that the sensors are powered on again when the activity resumes. Similarly, the sensors are unregistered (disconnected) in the onPause() method when the activity pauses.
We’re now ready to write the main code. First, let’s define two float-type arrays to hold the accelerometer and magnetometer output data. These are arrays because the outputs of these sensors are vector quantities, i.e., they have different values for different directions.
We can define the arrays named accel_read and magnetic_read for these sensors as follows:
Code 5.10
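A sketch of these declarations, using the array names given above:

```java
// Latest raw readings (x, y, z components) from each sensor.
float[] accel_read;
float[] magnetic_read;
```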
Please write these declarations just before the onSensorChanged() method so that we can access these variables from anywhere in the onSensorChanged() method.
Inside the onSensorChanged() method: this method is called automatically whenever there is a new sensor event, so we’ll write our main code here. The following code creates objects to access the ImageView and TextView of the GUI, which will be updated when a sensor event happens:
Code 5.11
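Assuming the field names from the earlier sketches and the IDs set in Section 4, this step might look like the following:

```java
// Look up the GUI components that will be updated on every sensor event.
iv_compass = (ImageView) findViewById(R.id.iv_compass);
tv_degrees = (TextView) findViewById(R.id.tv_degrees);
```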
Then, the following code reads accelerometer and magnetometer sensors and stores the output data to accel_read and magnetic_read arrays:
Code 5.12
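A sketch of how the readings could be stored; copying the values with clone() is my own addition so we keep a snapshot rather than a reference to the event's internal array.

```java
if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
    accel_read = event.values.clone();    // copy the latest accelerometer reading
}
if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
    magnetic_read = event.values.clone(); // copy the latest magnetometer reading
}
```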
If the sensor outputs are available (i.e. they are not null), we’ll use the accel_read and magnetic_read variables in the method called getRotationMatrix() to get the rotation matrix R of the device as follows:
Code 5.13
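A sketch of this step, using the R and successful_read names mentioned in the next paragraph; the extra inclination matrix I is required by getRotationMatrix() even though we do not use it further.

```java
if (accel_read != null && magnetic_read != null) {
    float[] R = new float[9];  // rotation matrix
    float[] I = new float[9];  // inclination matrix (required by the API, not used further)
    boolean successful_read = SensorManager.getRotationMatrix(R, I, accel_read, magnetic_read);
    // ... the azimuth calculation shown in the next code block goes here ...
}
```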
If this operation is successful, the successful_read variable will be true and the rotation matrix will be stored in the variable R. In this case, we’re ready to get the azimuth angle (the angle between the device direction and the absolute north) as follows:
Code 5.14
In this code:
- A new array called orientation is declared.
- The orientation of the device is extracted using the getOrientation() method and 3-dimensional orientation data is stored in the orientation array.
- The first component of this array is the azimuth angle in radians, which is assigned to the azimuth_angle variable in the fourth line.
- In the fifth line, the azimuth angle in radians is converted to degrees and assigned to the newly created variable degrees.
- The degrees variable is of float type, so it is better to round it to an integer. The sixth code line does this using the Math.round() method.
- Finally, the azimuth angle in integer degrees is shown in the TextView in the user interface. The char 0x00B0 is used to display the degree symbol (°).
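Putting the steps listed above together, a hedged sketch of this block might look like the following; the names degrees and degreesInt follow the text, while the exact formatting of the TextView output is my assumption.

```java
if (successful_read) {
    float[] orientation = new float[3];                     // a new array for the orientation data
    SensorManager.getOrientation(R, orientation);           // fills it with azimuth, pitch, and roll
    azimuth_angle = orientation[0];                         // azimuth angle in radians
    float degrees = (float) Math.toDegrees(azimuth_angle);  // convert radians to degrees
    int degreesInt = Math.round(degrees);                   // round to an integer
    tv_degrees.setText(String.valueOf(degreesInt) + (char) 0x00B0);  // e.g. "37°"
}
```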
It is also good to rotate the compass image according to the azimuth angle. For this animation, we need to declare a float type variable which will hold the current value of the ImageView’s rotation degree:
Code 5.15
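A sketch of this declaration; the name current_degree is my own choice.

```java
// Angle the compass image is currently rotated to; used as the start angle of the next animation.
float current_degree = 0f;
```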
Then, we can use the following animation code which will rotate the ImageView according to the azimuth angle:
Code 5.16
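A hedged sketch of the animation step; the pivot settings, the 250 ms duration, setFillAfter(), and the negative sign on the angle (so the needle turns opposite to the device's rotation and keeps pointing north) are my assumptions rather than a copy of the original listing.

```java
// Assumed imports: android.view.animation.Animation, android.view.animation.RotateAnimation
// Rotate the compass image from its previous angle to the new azimuth angle.
RotateAnimation rotate = new RotateAnimation(
        current_degree, -degreesInt,
        Animation.RELATIVE_TO_SELF, 0.5f,   // pivot around the image center (x)
        Animation.RELATIVE_TO_SELF, 0.5f);  // pivot around the image center (y)
rotate.setDuration(250);        // animation duration in milliseconds
rotate.setFillAfter(true);      // keep the image at the final angle after animating
iv_compass.startAnimation(rotate);
current_degree = -degreesInt;   // remember this angle for the next sensor event
```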
In this code, we declare a RotateAnimation object and set the animation duration. Calling startAnimation() starts the rotation of the ImageView. This code rotates the compass image in real time according to the degreesInt variable, which holds the azimuth angle data.
Combining all of these code lines, we arrive at the MainActivity.java file shown below:
Code 5.17
6.0 Building and Running the App
If we run the app in an emulator, the compass will constantly point north and show the azimuth angle as 0 degrees. We need to try this app on a real device with a magnetometer and an accelerometer inside (most Android devices have them). Please build the app in Android Studio and install it on a real device. I tried this app on an Asus Zenfone, and it works as expected:
Figure 6.1. Compass app running on a real device
7.0 Troubleshooting
For questions or issues related to Android and Android app development, visit https://developer.android.com/studio/troubleshoot or https://developer.android.com, which provide extensive documentation and help.
8.0 References
- https://developer.android.com/index.html
- https://www.udacity.com/course/android-development-for-beginners--ud837
- http://www.instructables.com/id/How-To-Create-An-Android-App-With-Android-Studio/
- Neil Smyth, Android Studio Development Essentials, CreateSpace Independent Publishing Platform, 2016.
- Sam Key, Android Programming in a Day, CreateSpace Independent Publishing Platform, 2015.
- Barry A. Burd, Android Application Development All-in-One For Dummies, For Dummies, 2015.
Expanded Definition
TO: Prof. Jason Ellis
FROM: Ali Hossain
DATE: 04/02/2021
SUBJECT: Expanded Definition of Cyber Security.
Introduction:
The purpose of this document is to discuss the history of a term for those who are studying computer systems technology. The term that I am defining is “Cyber Security”. This document will explain why and how to enhance cybersecurity. Reducing model complexity, improving prediction accuracy, and assessing exploitability are the topics that will be explained throughout the document. Here, I am going to discuss the definitions of the term and its contextual use. At the end of this document, I will provide a working definition of the term that is relevant to people who are studying computer systems technology.
Definition:
The Oxford English Dictionary defines cybersecurity as “The state of being protected against the criminal or unauthorized use of electronic data, or the measures taken to achieve this.” Computer security, cybersecurity, or information technology security (IT security) is the protection of computer systems and networks from information disclosure, theft of or damage to their hardware, software, or electronic data, as well as from the disruption or misdirection of the services they provide. With an increasing number of users, devices, and programs in the modern enterprise, combined with a growing deluge of data, much of it sensitive or confidential, the importance of cybersecurity continues to grow. The growing volume and sophistication of cyber attackers and attack techniques compound the problem even further. “In the last few years, advancement in Artificial Intelligent (AI) such as machine learning and deep learning techniques has been used to improve IoT IDS (Intrusion Detection System).” Reducing model complexity, improving prediction accuracy, and assessing exploitability are the topics that will be explained throughout this document. “Dynamic Feature Selector (DFS) uses statistical analysis and feature importance tests to reduce model complexity and improve prediction accuracy.” Manual feature selection is much slower and leaves a larger feature size, which is why a dynamic feature selector is the more practical approach.
The dynamic and reflective features of programming languages are powerful constructs that programmers often cite as extremely useful. However, the ability to modify a program at runtime can be both a boon, in terms of flexibility, and a curse, in terms of tool support. For instance, the use of these features hampers the design of type systems, the precision of static analysis techniques, and the application of optimizations by compilers. One empirical study examined a large Smalltalk codebase, often regarded as the poster child in terms of the availability of these features, in order to assess how much these features are actually used in practice, whether some are used more than others, and in which kinds of projects. The study also performed a qualitative analysis of a representative sample of usages of dynamic features in order to uncover the principal reasons that drive people to use dynamic features, and whether and how these dynamic features are used.
Context:
The Internet of Things has a great influence over modern systems, which has attracted many cybercriminals to carry out malicious attacks and to continuously target exposed end nodes. To prevent huge data loss, it is crucial to detect infiltration and intruders. Reducing model complexity and improving prediction accuracy can do the work. Machine learning and deep learning are helping with the problem of detecting intruders. “Machine learning algorithms are becoming very efficient in intrusion detection systems with their real time response and adaptive learning process.” Statistical analysis and feature importance tests can be used to reduce model complexity and improve prediction accuracy. This is where the Dynamic Feature Selector comes to the rescue. DFS showed high accuracy and a reduction in feature size. “For NSL-KDD, experiments revealed an increment in accuracy from 99.54% to 99.64% while reducing feature size of one-hot encoded features from 123 to 50. In UNSW-NB15 we observed an increase in accuracy from 90.98% to 92.46% while reducing feature size from 196 to 47.” The new process is more accurate, and fewer features are required for processing.
Working Definition:
Based on the definitions and quotes discussed above, the term cybersecurity is closely tied to the computer systems technology major. As per my understanding, in machine learning, model complexity often refers to the number of features or terms included in a predictive model, as well as whether the chosen model is linear, nonlinear, and so on. It can also refer to the algorithmic learning complexity or computational complexity. Accuracy is defined as the percentage of correct predictions on the test data; it can be calculated easily by dividing the number of correct predictions by the total number of predictions. An exploit is any attack that takes advantage of vulnerabilities in applications, networks, operating systems, or hardware. Exploits usually take the form of software or code that aims to take control of computers or steal network data.
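As a small illustration of the accuracy calculation described above, here is a short sketch in Java (the method name is my own):

```java
// Accuracy as the percentage of correct predictions among all predictions on the test data.
public static double accuracyPercent(int correctPredictions, int totalPredictions) {
    return 100.0 * correctPredictions / totalPredictions;
}
```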
Reference:
Alazab, A., & Khraisat, A. (2021). A critical review of intrusion detection systems in the internet of things: Techniques, deployment strategy, validation strategy, attacks, public datasets and challenges. Cybersecurity, 4, Article 18.
Ahsan, M., Gomes, R., Chowdhury, M. M., & Nygard, K. E. (2021). Enhancing machine learning prediction in cybersecurity using Dynamic Feature Selector. J. Cybersecur. Priv., 1(1), 199-218.
Oxford English Dictionary. Cybersecurity. en.oxforddictionaries.com (site first indexed by Google in September 2016).
Andrew Dindyal – Instruction on installing Windows 10 on MS Hyper-V Virtual Machine
For this project, my instruction manual is on installing Windows 10 on a Hyper-V virtual machine. See the Google Doc link below:
https://docs.google.com/document/d/e/2PACX-1vQa_VZhtKL8DKydvxA78gbBcz19wR439QUczztkXJjiD0AkgD6zJKGeDWZuzyKsZZO1V4hkJDpMwVfO/pub
Angel Rojas – Instruction set on how to set up an SSH Client on Linux and Connect via Windows
My instruction manual covers SSH client installation.
Installing and Establishing an SSH Server Client and Connecting via Windows
Mohammad Amin’s instructional manual of How to Install Ubuntu on a MacBook Pro
I wrote an instructional manual on how to install Ubuntu on a MacBook Pro.