Final Paper

Internet Search Engine Keywords:

  • Cyberpol = internet policing agency (virtual authority) commissioned by the IWIC
  • Cyberpoling = the act of patrolling the internet
  • Cybertute = internet constitution (virtual freedoms)
  • Cyber law = laws governing the USAGE of the internet
  • Cyber net = internet virtual jail
  • Cyber crime = the act of violating or committing a virtual infraction of cyber law on the internet
  • World Bill of Rights = a clearly written list of freedoms expressed and granted to the users of the internet, protected and enforced by Cyber law
  • IWIC = International World Internet Committee
  • DVIU = Department of Virtual Internet Usage
  • VL = Virtual License
  • IJPS = Internet Jurisdiction Positioning System
  • IZ = Internet zoning
  • Grid = the mapping technology used for tracking and identification of users on the net
  • CCC = Central Control Center, whose primary mandate is to use the available technologies in programming, processing, and code locking to oversee the security of the net

We now live in a society where information and communication are worth more than the dollar itself.  With the dramatic increase in cyber-crime, identity theft is among the most prevalent offenses, next to copyright theft.  Globally, the world is also affected by the war on terror, so much so that the war no longer exists only in a physical realm; it also exists in a virtual one.  Because of the internet and its lack of security, these terrorists do not even have to leave the comfort of their caves in order to recruit, manipulate, organize, and dispatch their dangerous activities.  The internet has also provided a safe haven for black-market activities and distribution.  The business community struggles to cope with the resulting loss in revenue, which eventually leads to unemployment.  By imposing structure on the internet we ensure a safer environment for its free usage.  The internet was not intended to create a new path for crime to thrive, but to increase communication and the travel of information.  A common analogy refers to the internet, in its usage and capabilities, as the information superhighway.  There are laws governing roads, highways, and byways, so why can't we treat the internet the same way?  It will forever remain free; it is just that, in order to use it, identification will be required.

Conceptually, the structure and framework for organizing the internet are derived from a virtual form of our modern-day DMV.  Its premise is focused on securing identification for its users, known as a virtual license.  A virtual license will consist of not only a workstation IP address but also a personal identification code issued by the DVIU (Department of Virtual Internet Usage).  This will establish not only who is committing cyber crime (the act of violating or committing a virtual infraction of cyber law on the internet) but also their location at the time of the violation.  Cyberpol is an organization commissioned by the IWIC to, in short, police the internet and its associated cyberspace environments.  Cyberpol itself is governed by the IWIC (International World Internet Committee).  The IWIC, like the United Nations, is a collaboration of world governments, which has set up a Cybertute (internet constitution) and Cyber law (the laws that govern the internet and its surrounding cyberspace).  Conviction of cyber crimes warrants confinement to Cyber net (internet virtual jail) and/or revocation of the VL.

In order for the Internet Jurisdiction Positioning System (IJPS) to be placed on the internet (the Net), the Net is first broken down into zone quadrants that identify the continents along XYZ axes.  For mapping purposes each quadrant is then color coded, so that whoever virtually travels to a region has that color code attached to their trip route, much as each country stamps your passport when you travel internationally.  What the passport grants in international travel, the color code will grant in virtual travel.  Another important factor is identifying whether travel to a region is for business or personal use, so a Class ID will be assigned at the point of registration with the DVIU, granting access to virtual travel for business or personal use.  This assignment benefits the specific country through the requisition of taxes once it is established that the individual was there on business rather than for personal use.  In addition, only the country visited, not the country of origin, may levy taxes for business use; this alleviates double dipping at the expense of the traveler.  The creation of a grid structure defines distinct jurisdictional paths for travel and allows each country to act as its own control tower, assuming responsibility for how the internet is being used within its borders.  A further benefit to the world is that monitoring the grid will create new jobs, and thus a new economic base for each region.

Zone Quadrant | Zone ID number | Class ID (B = business, P = personal)

  • Blue = North America | NA-5013 | B or P
  • Red = Europe | EU-2306 | B or P
  • Yellow = Africa | AF-1945 | B or P
  • Orange = Asia | AS-3864 | B or P
  • Green = South America | SA-4210 | B or P
  • Grey = Antarctica | AN-6653 | B or P
  • Purple = Australia | AU-7981 | B or P

[Figure: grid map of the color-coded zone quadrants]
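
As a minimal sketch of this scheme, the zone table above can be expressed as a simple lookup, with a hypothetical stamp_route() helper showing how a color code could attach to a user's trip route, passport-style. The zone codes and colors come from the table; everything else is an illustrative assumption.

# Zone-quadrant lookup built from the table above.
ZONES = {
    "NA-5013": ("Blue", "North America"),
    "EU-2306": ("Red", "Europe"),
    "AF-1945": ("Yellow", "Africa"),
    "AS-3864": ("Orange", "Asia"),
    "SA-4210": ("Green", "South America"),
    "AN-6653": ("Grey", "Antarctica"),
    "AU-7981": ("Purple", "Australia"),
}

def stamp_route(trip_route, zone_id, class_id):
    """Append a color-coded 'passport stamp' for a visited zone.
    class_id is 'B' (business) or 'P' (personal), per the Class ID column.
    This helper is hypothetical, not part of any existing system."""
    color, continent = ZONES[zone_id]
    trip_route.append({"zone": zone_id, "continent": continent,
                       "color": color, "class": class_id})
    return trip_route

# Example: a user virtually travels from North America to Europe on business;
# per the taxation rule above, only the visited zone (EU) may levy the tax.
route = []
stamp_route(route, "NA-5013", "P")
stamp_route(route, "EU-2306", "B")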

As discussed, each individual user will be assigned a VL as permission and identification to use the internet.  We track user information through the VL history, which acts as a footprint of travel.  One may acquire a VL for business, for personal use, or for both at the same time.  Through CPU rendering (alphanumeric calculation) and processing we will be able to pinpoint in real time the when, where, and why of internet usage.  Currently there is no structured identification system, nor a structured accountability system, on the net for doing business, as discussed in the link presented below:

http://www.loc.gov/rr/business/ecommerce/

Thus we have created a system of democratic checks and balances without violating an individual's right to surf (travel) the net.  The creation of a virtual constitution is therefore necessary to ensure a user's right to free surfing.  This also shows the need for a system of virtual laws (Cyber laws) to ensure that a user's freedoms, safety, and accountability are upheld.

http://www.nap.edu/netsafekids/pp_li_il.html

http://www.enotes.com/internet-reference/internet-regulation

http://www.ncsl.org/issues-research/telecom/state-laws-related-to-internet-privacy.aspx

Interestingly enough, the laws currently in effect for the governance of the net are not followed by everyone in the world community.  For example, the article below speaks specifically about this divergence and the need for a world commune for the safety of our net users.  It references Europe not abiding by internet laws established by the US.  It also shows the need for the cooperation of the business community, as well as other entities, in order to ensure these freedoms are met.  The idea of free trade vs. fair trade comes into play when deducing the effects of internet governance from the side of the business community.

http://agc-blog.agc.gov.my/agc-blog/?p=1216

In the communist People's Republic of China, the government has imposed new regulations for the identification of internet users.  It now requires internet users and providers to register their real names and identities instead of aliases.  The new enactment addresses the business community's protection of commercial secrets, as well as websites that communist China views as politically sensitive.  The aim is to make internet companies accountable and to have them assume more responsibility for their content and its handling.  The cell phone industry is a particular focus of these new regulations and is required to report violators back to the authorities.  As a result of these forced regulations, China has exposed a series of sexual and financial scandals that have led to the resignations or dismissals of at least 10 local officials, thus proving the benefit of, and need for, policing the internet.

http://www.nytimes.com/2012/12/29/world/asia/china-toughens-restrictions-on-internet-use.html?ref=internetcensorship&_r=1&

 

In an attempt to demonstrate why we should govern the net, a group of students wrote a paper breaking down laws and concepts for internet usage, classifying contributing internet groups and communities and their usage under the governing laws that ensure free trade.

http://www.research.rutgers.edu/~ungurean/papers/communities.pdf

It has become more and more evident that for complete governance of the internet to work, we will need a world consciousness united in the benefits of its purpose.  This must include, but is not limited to, commitment from the world business community and other entities.  This is also the reason to doubt that governance of the net will come to fruition: our world leaders have made it a point to stay divided, whether for the benefit of religious, economic, political, or trade sovereignty.  Clearly, the benefits of governing the net outweigh the negative impact of changing the way our society conducts itself.  We would embark on a social revolution, questioning whether we are imposing censorship over freedom.  In some parts of the world where government controls everything, this view of censorship would actually be welcomed, whereas in the US it would be deemed the end of democracy as we know it.  The US has for many years made it its mission to spread the concept of democracy around the world; there are even some world leaders who may call the US a bully for trying to impose its political agenda on them.  By implementing Cyberpol, a new world consciousness will not only emerge but will be led by a new socio-economic movement.

The new economy developed by cyberpoling will allow those countries that once did not benefit from the net to implement a new source of revenue in the form of taxation.  It used to be that a country showed strength through its ability to export goods without being dependent on imports.  This socio-economic revolution can be compared to the industrial revolution of the late 1800s and early 1900s throughout the world.  It also puts a dent in the world's black-market trade, especially in those third-world countries dependent on its illegal activities.  Another aspect of the social impact would be the breaking down of terrorist organizations.  It is well known that the war on terror depends on communications; by disrupting their ability to communicate and organize, we (the world community) effectively diminish their ability and power.

http://www.loc.gov/teachers/classroommaterials/primarysourcesets/industrial-revolution/pdf/teacher_guide.pdf

http://americanhistory.about.com/od/industrialrev/a/indrevoverview.htm

Timeline

Going forward, a fully functioning constitution (Cybertute) will be established under the notion of a World Bill of Rights: a clearly written list of freedoms expressed and granted to the users of the internet, protected and enforced by Cyber law.  This will of course be modeled after our own Constitution and Bill of Rights, but now in the interest of a world consciousness and not with the specific agenda of spreading democracy.  Next comes the defining of Cyber laws to protect and govern users' freedoms when cyberpoling.  The IJPS then establishes zonal jurisdiction, defining not only restrictions for each ID but also the creation of the Grid, the mapping technology used for tracking and identification of users on the net.  To organize the grid we will use an alphanumeric algorithm system to identify the zones and the users traveling to and from each zone, and to map the travel restrictions to be imposed (a sketch follows below).  Finally comes the creation of a central repository (Cyber net) to isolate and restrict virtual IDs in accordance with the cyber laws.  All of this will be done by a Central Control Center (CCC), whose primary mandate is to use the available technologies in programming, processing, and code locking to oversee the security of the net.
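
As a hedged sketch of the alphanumeric scheme just described, the snippet below prefixes each user ID with its home zone code and logs travel to and from zones. All names and formats here are illustrative assumptions, not a specification.

import itertools

def make_user_id(zone_code, sequence):
    """Illustrative ID format: make_user_id('NA-5013', 42) -> 'NA-5013-000042'."""
    return "%s-%06d" % (zone_code, sequence)

_counter = itertools.count(1)   # hypothetical DVIU-issued sequence numbers
travel_log = []                 # (user_id, from_zone, to_zone) triples

def log_travel(user_id, from_zone, to_zone):
    # each crossing is recorded against the ID, giving the Grid its travel map
    travel_log.append((user_id, from_zone, to_zone))

uid = make_user_id("NA-5013", next(_counter))
log_travel(uid, "NA-5013", "EU-2306")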

Deliverables

  • The creation of a constitution that consists of all known laws currently applied to the internet, plus new laws comparable to our current Constitution but tailored to govern the virtual environment of the internet.
  • The creation of a user's Bill of Rights: a detailed statement of the rights of, and responsibilities owed to, the user.
  • The creation of cyber laws: a detailed mock-up of the laws governing the internet and its zone differentiations.  These laws will be created by the IWIC for Cyberpol to enforce.
  • The creation of internet protocol: a systematic framework for Cyberpol to use in enforcing cyber law.

Tangibles

  • In order to create the repository we will use Python in the Blender physics engine as a means of maintaining and isolating user identification profiles.  In programming terms, there will be only one way in and one way out.
  • We will also use Blender to create a 3D version of the grid to display access and zone jurisdiction.
  • Creation of zones and tracking grids.
  • Research into how signals travel through cyberspace, then attach an alpha character to act as a bug.  This is the intent behind our tracking initiatives.
  • Creation of user IDs from requested Google API characters, plus an ID character from the DVIU and an assigned user PIN.  A zone character will be placed on your user ID to act as a passport for international surfing (see the assembly sketch after this list).
  • Depending on your specific license, your ID may require a tax ID number for the ability to use the net for eCommerce.
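
The bullets above describe assembling a user ID from several fields. Below is a minimal sketch of that assembly; the field order (API characters, DVIU character, PIN, zone character, optional tax ID) follows the bullets, while the separator and example values are our own assumptions.

def build_virtual_license(api_chars, dviu_char, pin, zone_char, tax_id=None):
    """Assemble a VL string from its fields; the layout is illustrative only."""
    parts = [api_chars, dviu_char, pin, zone_char]
    if tax_id:  # required only for licenses used for eCommerce
        parts.append(tax_id)
    return "-".join(parts)

# Hypothetical example values:
vl = build_virtual_license("G4XQ", "D7", "2468", "NA", tax_id="TX-001234")
print(vl)  # G4XQ-D7-2468-NA-TX-001234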

Journal Entry

Virtual: Constitution / Bill of Rights

 http://www.archives.gov/exhibits/charters/constitution.html

Amendment I

·         The right of all United States citizens to access the internet freely for personal use, devoid of taxation or toll.

Amendment II

·         The right of all United States citizens to engage in commerce while using the internet.

Amendment III

·         The right of the business community, while engaging in commerce, to apply taxation to e-commerce transactions.

Amendment IV

·         The right of nations and countries to charge taxes on e-commerce activity conducted in their respective zones by the business communities of the world.

Amendment V

·         All copyrighted material or patents exposed, software torrented, or downloads made without permission are subject to prosecution to the fullest extent of the law.

Amendment VI

·         Any violent acts committed through usage of the internet are subject to prosecution to the fullest extent of the law.  This includes, but is not limited to, terrorism, solicitation of minors, defamation of character (bullying), solicitation for groups known to consort in violent acts, gangs, and racial indecency.

Amendment VII

·         Any acts committed against the community, society, or mankind through the use of the internet are subject to prosecution.

Journal Entry I

for Wish List Project

The major resistance to governing the internet seems to concern freedom of usage, but at what cost to society do we allow these freedoms when, in a sense, our way of life is affected or threatened by enemies at home and abroad?  Many analysts think that by governing the internet you diminish the very reason for its conceptualization, and that governing it promotes censorship.  I argue that there needs to be more accountability and responsibility for users of the internet.  A student paper at Rutgers University explained in detail why it should not be governed, but my impression is that, as usual, money is the real motivator behind businesses rallying for the dismissal of the topic.

http://www.salon.com/2012/12/05/conference_takes_up_how_to_govern_the_internet/

http://video.foxnews.com/v/2001189166001/governing-the-internet/

http://www.research.rutgers.edu/~ungurean/papers/communities.pdf

Invention Journal Entry I

Posted on March 5, 2013 by babyxface / Rosa Lee

In an attempt to combat piracy on the internet, internet carriers (Verizon, Time Warner, AT&T, Comcast, etc.) are trying to enforce what they are calling a six-strike program to discourage illegal client usage of the internet.  The program consists of IP-address monitoring; a sketch of the escalation logic follows the list below.

  • The first two infractions warrant an email notification informing clients of the infraction, coupled with an informational attachment on copyright laws.
  • The third and fourth violations warrant a splash screen where you have to follow the steps and acknowledge your illegal trading, or you won't be able to continue using the internet.
  • The fifth and sixth offenses warrant an email and splash page along with a 14-day suspension of service.
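
For clarity, here is a small sketch of the escalation logic described in the list above; the action strings paraphrase the bullets and are not the ISPs' actual implementation.

def six_strike_action(strike):
    """Map a strike count (1-6) to the program's graduated response."""
    if strike in (1, 2):
        return "email notice with copyright-law attachment"
    if strike in (3, 4):
        return "splash screen: must acknowledge infringement to continue"
    if strike in (5, 6):
        return "email and splash page plus 14-day service suspension"
    return "beyond the program's scope"

for s in range(1, 7):
    print(s, six_strike_action(s))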

For internet providers to now want to police their clients' usage is a direct result of pressure from the business community over the revenue lost to those clients' activities.  The real question is: is it legal for your service provider to monitor your usage of a free entity?  For example, if you rent a car and rob a bank with it, can the rental car company sue you for misuse of their property?

http://www.dslreports.com/shownews/Time-Warner-Cable-Gives-Us-Their-Six-Strike-Details-122103

http://www.ibtimes.com/how-six-strike-program-works-time-warner-comcast-att-other-isps-working-together-combat-online

http://arstechnica.com/tech-policy/2011/07/major-isps-agree-to-six-strikes-copyright-enforcement-plan/

Journal Entry II

Mid Term Entry

One of my primary goals was to show the need for cooperation from our world governments alongside the business community and other entities (social media outlets) in order for this project to work.  It is definitely possible once you show how the good outweighs the bad; as a matter of fact, it has already happened with the social media revolution.  Facebook is a prime example of how the world's social structure has changed through the adoption of social communication in real time, so much so that even our political leaders are campaigning through the social media arena.  From marketing advertisements to political campaigning, social media has embedded values in world operations.  Even at home, when Hurricane Sandy hit, people were rushing to gain internet access wherever they could.  Everyday citizens are now real-time reporters, documenting everything from crime down to police brutality by the use of cell phones.

This revolution of technology and social structure justifies the need for more control of the net.  Governments are already starting either to create laws or to impose restrictions on internet providers and their users, which displays the current trend down the path toward a fully governed net.  It also shows acknowledgment by governments, conforming to and confirming the social change and revolution.  This leads to the organization of the net and my reason for creating a constitution, a bill of rights, and laws to fully govern it.  Once this is set up, we can then focus on the grid structure, jurisdiction, patrolling, and conviction of violators of the net.

Rosa Lee Journal Entry II

In researching the necessity for my project, I discovered that countries are already starting to implement their own forms of restrictions on internet usage.  Case in point: in the U.S., internet providers (Time Warner, Comcast, etc.) have attempted to regulate the internet by imposing a six-strike rule for downloading, copying, and pirating materials.  The current U.S. laws pertaining to internet usage only address copyright and sexual solicitation.  This move by the internet companies and the business community shows that cooperation from the internet providers can work, and that change in the social structure occurs when there is monetary loss.  The U.S. is governed democratically, and unfortunately other countries feel that it is imposing its political ideology on them, but they are still affected by the social revolution.  Case in point: communist China.  Although it does not support democracy, it still suffers from the same consequences of misuse of the internet, so much so that it imposes its own restrictions requiring internet providers and users to provide identification and be accountable.  This has benefited China, for it has uncovered several scandals involving officials who were engaged in sexual crimes and misconduct.  Therefore, the need for regulation of the internet is global, and our plan to create a grid structure for regulating the internet will be approved by the world community because of the social revolution occurring as a result of the misuse of the internet.

 

Journal Entry-Rosa Lee & Ian Hodgson

Posted on May 1, 2013 by babyxface

To use Google Maps within an Android application, you must install the Google API (application programming interface), a set of tools for building software applications, in the Android SDK. By installing the Google Maps API, you can embed the Google Maps site directly into an Android application, and then overlay app-specific data on the maps. The Android Google Maps API is free for commercial use providing that the site using it is publicly accessible and does not charge for access. If the app is for public sale, you must use Google Maps API Premier, which can be accessed for a per-usage subscription fee. The classes of the Google Maps Android library offer built-in downloading, rendering, and caching of mapping tiles, as well as a variety of display options and controls. Multiple versions of the Google Maps API add-on are available, corresponding to the Android API level supported in each version. This text uses Android 4.0 Google APIs by Google Inc. You must download the add-on to your computer and install it in your SDK environment to create an Android Google Maps app. To install the Android 4.0 Google API, follow these steps:

1. Open the Eclipse program. Click Window on the menu bar and then click Android SDK Manager to view the SDK files available. The Android SDK Manager dialog box opens with the current SDK packages listed.

2.   In the Android 4.0 (API 14) category, check the Google APIs by Google Inc. check box, if it is not already installed (as indicated in the Status column). Click to remove the check mark from any other selected check boxes. Click the Install Packages button to install the Google API package. Close the Android SDK Manager after the installation.

The Android SDK Manager is updated to include the Google APIs for use with the Google Maps features.

 Adding the AVD to Target the Google API

After you install the Android Google API, you set the application’s properties to select the Google APIs add-on as the build target. Doing so sets the Android Virtual Device (AVD) Manager to use the new Google API package. Make sure to select the version (by API level) appropriate for the Google API target. To target the Google API within the AVD Manager, follow these steps:

1. Click Window on the menu bar and then click AVD Manager.

2. Click the New button. Type Google_API in the Name text box. Click the Target  button, and then click Google APIs (Google Inc.) – API Level 14.

3. Click the Create AVD button.

4. Click the Close button to close the Android Virtual Device Manager dialog box.

Obtaining a Maps API Key from Google

Before you can run an Android Google Maps application, you need to apply for a free Google Maps API key so you can integrate Google Maps into your Android application. An Android map application gives you access to Google Maps data, but Google requires that you register with the Google Maps service and agree to the Terms of Service before your mapping application can obtain data from Google Maps. This applies whether you are developing your application on the emulator or preparing your application for deployment to mobile devices.

Registering for a Google Maps API key is free. The process involves registering your computer’s MD5 fingerprint. An MD5 (Message-Digest Algorithm 5) digital fingerprint is a value included as part of a file to verify the integrity of the file. Signing up with Google to register for a Google Maps API key is a task that needs to be performed only once and the purpose is mainly for security. A unique Google Maps API key is a long string of seemingly random alphanumeric characters that may look like this:

87:B9:58:BC:6F:28:71:74:A9:32:B8:29:C2:4E:7B:02:A7:D3:7A:DD

Certificate fingerprint (MD5): 94:1E:43:49:87:73:BB:E6:A6:88:D7:20:F1:8E:B5:98

The first step in registering for a Google Maps API key is to locate an MD5 fingerprint of the certificate used to sign your Android application. You cannot run a Google mapping application in your Eclipse Android emulator if it is not signed with your local API key. The Android installed environment contains a file named debug.keystore, which contains a unique identification. To locate the MD5 fingerprint of the debug certificate on your computer, follow these steps:

1. To generate an MD5 fingerprint of the debug certificate, first use Windows Explorer or the Finder to locate the debug.keystore file in the active AVD directory. The location of the AVD directories varies by platform:

• Windows 7 or Windows Vista: C:\Users\<user>\.android\debug.keystore

• Windows XP: C:\Documents and Settings\<user>\.android\debug.keystore

• Mac OS X: ~/.android/debug.keystore

Note: The <user> portion of this path statement indicates your user account  name on your computer. For example, using a Windows 7 computer, the location of the AVD directory on a computer with a username of Corinne is:

C:\Users\Corinne\.android\debug.keystore.

2. On a Windows 7 or Vista computer, click the Start button. Type cmd in the Search box and press the Enter key. On a Windows XP computer, click the Start button. Click Run. Type cmd and press the Enter key. On a Mac computer, on the Desktop toolbar, click the Spotlight button (upper-right corner). In the Spotlight box, type terminal and then press the Return key. To find the MD5 fingerprint of your computer, in the Command Prompt window, type the following command, replacing <user> with the name of the account:

 In Windows 7 or Vista:

keytool.exe -list -alias androiddebugkey -keystore C:\Users\<user>\.android\debug.keystore -storepass android -keypass android

In Windows XP:

keytool.exe -list -alias androiddebugkey -keystore C:\Documents and Settings\<user>\.android\debug.keystore -storepass android -keypass android

In Mac OS X:

keytool -list -keystore ~/.android/debug.keystore

Press the Enter key.

3. To select the MD5 fingerprint in Windows, right-click the Command Prompt window and then click Mark on the shortcut menu. Select the MD5 fingerprint code, being careful not to include any extra spaces.

4. To copy the MD5 highlighted code, press the Ctrl+C keys (Windows) or the Command+C keys (Mac) to copy the code to the system Clipboard. The MD5 fingerprint is copied. You paste this code into a Web page in the next step.
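
For illustration only, this is how an MD5 digest of an arbitrary file can be computed in Python. Note that the real certificate fingerprint comes from keytool, as shown above, which hashes the signing certificate inside debug.keystore rather than the raw file bytes.

import hashlib

def md5_of_file(path):
    """Return a colon-separated uppercase MD5 digest, in keytool's style."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return ":".join("%02X" % b for b in h.digest())

# print(md5_of_file("debug.keystore"))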

To register the MD5 certificate fingerprint with the Google Maps service, follow these steps:

1. Start a browser and display the following Web site:

http://developers.google.com/android/maps-api-signup

2. Scroll down the page, if necessary, and check the I have read and agree with the terms and conditions check box. Click the My certificate’s MD5 fingerprint text box and then press the Ctrl+V keys (Windows) or the Command+V keys (Mac) to paste the MD5 fingerprint code from the Command Prompt window.

3. To display the Android Maps API key, click the Generate API Key button. If necessary, enter your Gmail e-mail address and password. (You need to create a Google account if you do not have one.)

 

Tracking MAC addresses and violators that try to spoof:

First, you must ping the target. That will place the target — as long as it’s within your netmask, which it sounds like in this situation it will be — in your system’s ARP cache. Observe:

13:40 jsmith@undertow% ping 97.107.138.15
PING 97.107.138.15 (97.107.138.15) 56(84) bytes of data.
64 bytes from 97.107.138.15: icmp_seq=1 ttl=64 time=1.25 ms
^C
13:40 jsmith@undertow% arp -n 97.107.138.15
Address        HWtype  HWaddress          Flags Mask  Iface
97.107.138.15  ether   fe:fd:61:6b:8a:0f  C           eth0

Knowing that, you do a little subprocess magic; otherwise you're left parsing the ARP cache yourself:

 

>>> import re
>>> from subprocess import Popen, PIPE
>>> IP = "1.2.3.4"
>>> # do_ping(IP)
>>> # The time between ping and arp check must be small, as ARP may not cache long
>>> pid = Popen(["arp", "-n", IP], stdout=PIPE)
>>> s = pid.communicate()[0]
>>> mac = re.search(r"(([a-f\d]{1,2}\:){5}[a-f\d]{1,2})", s).groups()[0]
>>> mac
'fe:fd:61:6b:8a:0f'

This is a more complex example, which does an ARP ping and reports what it found as a LaTeX-formatted table.

#! /usr/bin/env python
# arping2tex : arpings a network and outputs a LaTeX table as a result

import sys
if len(sys.argv) != 2:
    print "Usage: arping2tex <net>\n  eg: arping2tex 192.168.1.0/24"
    sys.exit(1)

from scapy.all import srp,Ether,ARP,conf
conf.verb=0
ans,unans=srp(Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(pdst=sys.argv[1]),
              timeout=2)

print r"\begin{tabular}{|l|l|}"
print r"\hline"
print r"MAC & IP\\"
print r"\hline"
for snd,rcv in ans:
    print rcv.sprintf(r"%Ether.src% & %ARP.psrc%\\")
print r"\hline"
print r"\end{tabular}"

Here is another tool that will constantly monitor all interfaces on a machine and print every ARP request it sees, even on 802.11 frames from a Wi-Fi card in monitor mode. Note the store=0 parameter to sniff(), which avoids storing all packets in memory for nothing.
from scapy.all import *

def arp_monitor_callback(pkt):
    if ARP in pkt and pkt[ARP].op in (1,2): #who-has or is-at
        return pkt.sprintf("%ARP.hwsrc% %ARP.psrc%")

sniff(prn=arp_monitor_callback, filter="arp", store=0)

Journal Entry

Virtual license

[Figure: sample Virtual License]

Stage I

1. The API code for tracking.

2. The continent code for grid mapping client location.

3. The last 4 digits of the client’s social security number for identification.

4. MAC address ID.

Stage II

1. Class letter identification for residential or commercial use.

2. Symbols to indicate taxation (individual or corporate), exemption, or government clearance (see the classification sketch after the Symbols list).

Stage III

1. Embed microchip onto the back of the Virtual License.

2. Activated once Internet is accessed.

Symbols

* = Exempt.

u = Taxable: individual or business with 12 or fewer employees.

¤ = Taxable: corporation with more than 12 employees.

v = Not-for-profit exemption.
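
A minimal sketch of the Stage II classification: mapping a registrant's profile to its taxation symbol. The thresholds and symbols come from the list above; the function itself is our illustrative assumption.

def tax_symbol(employees=None, nonprofit=False, exempt=False):
    """Return the VL taxation symbol for a registrant's profile."""
    if exempt:
        return "*"        # exempt
    if nonprofit:
        return "v"        # not-for-profit exemption
    if employees is None or employees <= 12:
        return "u"        # taxable individual or small business
    return "\u00a4"       # taxable corporation (more than 12 employees)

print(tax_symbol(employees=3))    # u
print(tax_symbol(employees=500))  # ¤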

Modifications:

Originally, for tracking purposes of the Virtual License, we were using the MAC address to isolate the CPU when the user accesses the internet.  After the research in our journal entry, we found it more effective to use the individual computer's MD5 fingerprint to track the CPU and user for the virtual license.  The reasoning is that it is actually quite easy to change or spoof your MAC address at any given time, whereas the MD5 fingerprint coordinates directly with Google's GPS capabilities.
Research: Philosophical

http://www.infoworld.com/d/security-central/10-building-blocks-securing-the-internet-today-165

 

During his keynote speech at RSA Conference 2011, Microsoft's corporate VP for Trustworthy Computing, Scott Charney, called for a more cooperative approach to securing computer endpoints. The proposal is a natural maturation of Microsoft's (my full-time employer) End-to-End Trust initiative to make the Internet significantly safer as a whole. It closely follows the plans I've been recommending for years; I've even written a whitepaper on the subject.

The most important point of this argument is that we could, today, make the Internet a much safer place to compute. All the open-standard protocols required to significantly decrease malicious attackers and malware already exist. What’s missing is the leadership and involvement from the politicians, organizations, and tech experts necessary to turn the vision into a reality.

http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=6017172&abstractAccess=no&userType=inst

This paper presents the security of the Internet of Things. In the Internet of Things vision, every physical object has a virtual component that can produce and consume services. Such extreme interconnection will bring unprecedented convenience and economy, but it will also require novel approaches to ensure its safe and ethical use. The Internet and its users are already under continual attack, and a growing economy, replete with business models that undermine the Internet's ethical use, is fully focused on exploiting the current version's foundational weaknesses.

Future Visualization

Where we see improvements and advancements to our project is actually in the area of tracking potential threats.  Traditionally, with the creation of the virtual ID it is hard for hackers to operate.  We would therefore use entrapment techniques to track and capture violators: once false identification is detected, we do not blow the whistle immediately but instead allow the session to continue and track its destinations, transactions, and so on, in an attempt to also capture any accomplices.  A hacker's mindset is to crack your security code, so we are purposely making a decoy crackable.
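
A minimal sketch of this entrapment idea, with a hypothetical decoy VL value and log format: sessions that authenticate with a planted decoy are silently logged rather than blocked, so destinations and accomplices can be traced.

DECOY_VLS = {"G4XQ-D7-0000-NA"}   # deliberately crackable, plausible-looking IDs
surveillance_log = []

def on_session(vl, destination):
    """Route a session: decoy IDs are tracked quietly instead of rejected."""
    if vl in DECOY_VLS:
        surveillance_log.append((vl, destination))  # don't blow the whistle
        return "allow"   # keep the intruder surfing while we watch
    return "normal"

on_session("G4XQ-D7-0000-NA", "EU-2306/black-market-storefront")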

Coding: Blender to After Effects scene export

bl_info = {
    “name”: “Export: Adobe After Effects (.jsx)”,
    “description”: “Export cameras, selected objects & camera solution 3D Markers to Adobe After Effects CS3 and above”,
    “version”: (0, 6, 3),
    “blender”: (2, 62, 0),
    “location”: “File > Export > Adobe After Effects (.jsx)”,
    “warning”: “”,
    “wiki_url”: “http://wiki.blender.org/index.php/Extensions:2.6/Py/”\
        “Scripts/Import-Export/Adobe_After_Effects”,
    “tracker_url”: “https://projects.blender.org/tracker/index.php?”\
        “func=detail&aid=29858”,
    “category”: “Import-Export”,
    }
import bpy
import datetime
from math import degrees
from mathutils import Matrix
# create list of static blender’s data
def get_comp_data(context):
    scene = context.scene
    aspect_x = scene.render.pixel_aspect_x
    aspect_y = scene.render.pixel_aspect_y
    aspect = aspect_x / aspect_y
    start = scene.frame_start
    end = scene.frame_end
    active_cam_frames = get_active_cam_for_each_frame(scene, start, end)
    fps = scene.render.fps
    return {
        ‘scn’: scene,
        ‘width’: scene.render.resolution_x,
        ‘height’: scene.render.resolution_y,
        ‘aspect’: aspect,
        ‘fps’: fps,
        ‘start’: start,
        ‘end’: end,
        ‘duration’: (end – start + 1.0) / fps,
        ‘active_cam_frames’: active_cam_frames,
        ‘curframe’: scene.frame_current,
        }
# create list of active camera for each frame in case active camera is set by markers
def get_active_cam_for_each_frame(scene, start, end):
    active_cam_frames = []
    sorted_markers = []
    markers = scene.timeline_markers
    if markers:
        for marker in markers:
            if marker.camera:
                sorted_markers.append([marker.frame, marker])
        sorted_markers = sorted(sorted_markers)
        if sorted_markers:
            for frame in range(start, end + 1):
                for m, marker in enumerate(sorted_markers):
                    if marker[0] > frame:
                        if m != 0:
                            active_cam_frames.append(sorted_markers[m – 1][1].camera)
                        else:
                            active_cam_frames.append(marker[1].camera)
                        break
                    elif m == len(sorted_markers) – 1:
                        active_cam_frames.append(marker[1].camera)
    if not active_cam_frames:
        if scene.camera:
            # in this case active_cam_frames array will have legth of 1. This will indicate that there is only one active cam in all frames
            active_cam_frames.append(scene.camera)
    return(active_cam_frames)
# create managable list of selected objects
def get_selected(context):
    cameras = [] # list of selected cameras
    solids = [] # list of all selected meshes that can be exported as AE’s solids
    lights = [] # list of all selected lamps that can be exported as AE’s lights
    nulls = [] # list of all selected objects exept cameras (will be used to create nulls in AE)
    obs = context.selected_objects
    for ob in obs:
        if ob.type == ‘CAMERA’:
            cameras.append([ob, convert_name(ob.name)])
        elif is_plane(ob):
            # not ready yet. is_plane(object) returns False in all cases. This is temporary
            solids.append([ob, convert_name(ob.name)])
        elif ob.type == ‘LAMP’:
            lights.append([ob, ob.data.type + convert_name(ob.name)]) # Type of lamp added to name
        else:
            nulls.append([ob, convert_name(ob.name)])
    selection = {
        ‘cameras’: cameras,
        ‘solids’: solids,
        ‘lights’: lights,
        ‘nulls’: nulls,
        }
    return selection
# check if object is plane and can be exported as AE’s solid
def is_plane(object):
    # work in progress. Not ready yet
    return False
# convert names of objects to avoid errors in AE.
def convert_name(name):
    name = “_” + name
    ”’
# Digits are not allowed at beginning of AE vars names.
# This section is commented, as “_” is added at beginning of names anyway.
# Placeholder for this name modification is left so that it’s not ignored if needed
if name[0].isdigit():
name = “_” + name
”’
    name = bpy.path.clean_name(name)
    name = name.replace(“-“, “_”)
    return name
# get object’s blender’s location rotation and scale and return AE’s Position, Rotation/Orientation and scale
# this function will be called for every object for every frame
def convert_transform_matrix(matrix, width, height, aspect, x_rot_correction=False):
    # get blender transform data for ob
    b_loc = matrix.to_translation()
    b_rot = matrix.to_euler(‘ZYX’) # ZYX euler matches AE’s orientation and allows to use x_rot_correction
    b_scale = matrix.to_scale()
    # convert to AE Position Rotation and Scale
    # Axes in AE are different. AE’s X is blender’s X, AE’s Y is negative Blender’s Z, AE’s Z is Blender’s Y
    x = (b_loc.x * 100.0) / aspect + width / 2.0 # calculate AE’s X position
    y = (-b_loc.z * 100.0) + (height / 2.0) # calculate AE’s Y position
    z = b_loc.y * 100.0 # calculate AE’s Z position
    # Convert rotations to match AE’s orientation.
    rx = degrees(b_rot.x) # if not x_rot_correction – AE’s X orientation = blender’s X rotation if ‘ZYX’ euler.
    ry = -degrees(b_rot.y) # AE’s Y orientation is negative blender’s Y rotation if ‘ZYX’ euler
    rz = -degrees(b_rot.z) # AE’s Z orientation is negative blender’s Z rotation if ‘ZYX’ euler
    if x_rot_correction:
        rx -= 90.0 # In blender – ob of zero rotation lay on floor. In AE layer of zero orientation “stands”
    # Convert scale to AE scale
    sx = b_scale.x * 100.0 # scale of 1.0 is 100% in AE
    sy = b_scale.z * 100.0 # scale of 1.0 is 100% in AE
    sz = b_scale.y * 100.0 # scale of 1.0 is 100% in AE
    return x, y, z, rx, ry, rz, sx, sy, sz
# get camera’s lens and convert to AE’s “zoom” value in pixels
# this function will be called for every camera for every frame
#
#
# AE’s lens is defined by “zoom” in pixels. Zoom determines focal angle or focal length.
#
# ZOOM VALUE CALCULATIONS:
#
# Given values:
# – sensor width (camera.data.sensor_width)
# – sensor height (camera.data.sensor_height)
# – sensor fit (camera.data.sensor_fit)
# – lens (blender’s lens in mm)
# – width (width of the composition/scene in pixels)
# – height (height of the composition/scene in pixels)
# – PAR (pixel aspect ratio)
#
# Calculations are made using sensor’s size and scene/comp dimension (width or height).
# If camera.sensor_fit is set to ‘AUTO’ or ‘HORIZONTAL’ – sensor = camera.data.sensor_width, dimension = width.
# If camera.sensor_fit is set to ‘VERTICAL’ – sensor = camera.data.sensor_height, dimension = height
#
# zoom can be calculated using simple proportions.
#
# |
# / |
# / |
# / | d
# s |\ / | i
# e | \ / | m
# n | \ / | e
# s | / \ | n
# o | / \ | s
# r |/ \ | i
# \ | o
# | | \ | n
# | | \ |
# | | |
# lens | zoom
#
# zoom / dimension = lens / sensor =>
# zoom = lens * dimension / sensor
#
# above is true if square pixels are used. If not – aspect compensation is needed, so final formula is:
# zoom = lens * dimension / sensor * aspect
def convert_lens(camera, width, height, aspect):
    if camera.data.sensor_fit == ‘VERTICAL’:
        sensor = camera.data.sensor_height
        dimension = height
    else:
        sensor = camera.data.sensor_width
        dimension = width
    zoom = camera.data.lens * dimension / sensor * aspect
    return zoom
# convert object bundle’s matrix. Not ready yet. Temporarily not active
#def get_ob_bundle_matrix_world(cam_matrix_world, bundle_matrix):
# matrix = cam_matrix_basis
# return matrix
# jsx script for AE creation
def write_jsx_file(file, data, selection, include_animation, include_active_cam, include_selected_cams, include_selected_objects, include_cam_bundles):
    print(“\n—————————\n- Export to After Effects -\n—————————“)
    # store the current frame to restore it at the end of export
    curframe = data[‘curframe’]
    # create array which will contain all keyframes values
    js_data = {
        ‘times’: ”,
        ‘cameras’: {},
        ‘solids’: {}, # not ready yet
        ‘lights’: {},
        ‘nulls’: {},
        ‘bundles_cam’: {},
        ‘bundles_ob’: {}, # not ready yet
        }
    # create structure for active camera/cameras
    active_cam_name = ”
    if include_active_cam and data[‘active_cam_frames’] != []:
        # check if more that one active cam exist (true if active cams set by markers)
        if len(data[‘active_cam_frames’]) is 1:
            name_ae = convert_name(data[‘active_cam_frames’][0].name) # take name of the only active camera in scene
        else:
            name_ae = ‘Active_Camera’
        active_cam_name = name_ae # store name to be used when creating keyframes for active cam.
        js_data[‘cameras’][name_ae] = {
            ‘position’: ”,
            ‘position_static’: ”,
            ‘position_anim’: False,
            ‘orientation’: ”,
            ‘orientation_static’: ”,
            ‘orientation_anim’: False,
            ‘zoom’: ”,
            ‘zoom_static’: ”,
            ‘zoom_anim’: False,
            }
    # create camera structure for selected cameras
    if include_selected_cams:
        for i, cam in enumerate(selection[‘cameras’]): # more than one camera can be selected
            if cam[1] != active_cam_name:
                name_ae = selection[‘cameras’][i][1]
                js_data[‘cameras’][name_ae] = {
                    ‘position’: ”,
                    ‘position_static’: ”,
                    ‘position_anim’: False,
                    ‘orientation’: ”,
                    ‘orientation_static’: ”,
                    ‘orientation_anim’: False,
                    ‘zoom’: ”,
                    ‘zoom_static’: ”,
                    ‘zoom_anim’: False,
                    }
    ”’
# create structure for solids. Not ready yet. Temporarily not active
for i, obj in enumerate(selection[‘solids’]):
name_ae = selection[‘solids’][i][1]
js_data[‘solids’][name_ae] = {
‘position’: ”,
‘orientation’: ”,
‘rotationX’: ”,
‘scale’: ”,
}
”’
    # create structure for lights
    for i, obj in enumerate(selection[‘lights’]):
        if include_selected_objects:
            name_ae = selection[‘lights’][i][1]
            js_data[‘lights’][name_ae] = {
                ‘type’: selection[‘lights’][i][0].data.type,
                ‘energy’: ”,
                ‘energy_static’: ”,
                ‘energy_anim’: False,
                ‘cone_angle’: ”,
                ‘cone_angle_static’: ”,
                ‘cone_angle_anim’: False,
                ‘cone_feather’: ”,
                ‘cone_feather_static’: ”,
                ‘cone_feather_anim’: False,
                ‘color’: ”,
                ‘color_static’: ”,
                ‘color_anim’: False,
                ‘position’: ”,
                ‘position_static’: ”,
                ‘position_anim’: False,
                ‘orientation’: ”,
                ‘orientation_static’: ”,
                ‘orientation_anim’: False,
                }
    # create structure for nulls
    for i, obj in enumerate(selection[‘nulls’]): # nulls representing blender’s obs except cameras, lamps and solids
        if include_selected_objects:
            name_ae = selection[‘nulls’][i][1]
            js_data[‘nulls’][name_ae] = {
                ‘position’: ”,
                ‘position_static’: ”,
                ‘position_anim’: False,
                ‘orientation’: ”,
                ‘orientation_static’: ”,
                ‘orientation_anim’: False,
                ‘scale’: ”,
                ‘scale_static’: ”,
                ‘scale_anim’: False,
                }
    # create structure for cam bundles including positions (cam bundles don’t move)
    if include_cam_bundles:
        # go through each selected camera and active cameras
        selected_cams = []
        active_cams = []
        if include_active_cam:
            active_cams = data[‘active_cam_frames’]
        if include_selected_cams:
            for cam in selection[‘cameras’]:
                selected_cams.append(cam[0])
        # list of cameras that will be checked for ‘CAMERA SOLVER’
        cams = list(set.union(set(selected_cams), set(active_cams)))
        for cam in cams:
            # go through each constraints of this camera
            for constraint in cam.constraints:
                # does the camera have a Camera Solver constraint
                if constraint.type == ‘CAMERA_SOLVER’:
                    # Which movie clip does it use
                    if constraint.use_active_clip:
                        clip = data[‘scn’].active_clip
                    else:
                        clip = constraint.clip
                    # go through each tracking point
                    for track in clip.tracking.tracks:
                        # Does this tracking point have a bundle (has its 3D position been solved)
                        if track.has_bundle:
                            # get the name of the tracker
                            name_ae = convert_name(str(cam.name) + ‘__’ + str(track.name))
                            js_data[‘bundles_cam’][name_ae] = {
                                ‘position’: ”,
                                }
                            # bundles are in camera space. Transpose to world space
                            matrix = Matrix.Translation(cam.matrix_basis.copy() * track.bundle)
                            # convert the position into AE space
                            ae_transform = convert_transform_matrix(matrix, data[‘width’], data[‘height’], data[‘aspect’], x_rot_correction=False)
                            js_data[‘bundles_cam’][name_ae][‘position’] += ‘[%f,%f,%f],’ % (ae_transform[0], ae_transform[1], ae_transform[2])
    # get all keyframes for each object and store in dico
    if include_animation:
        end = data[‘end’] + 1
    else:
        end = data[‘start’] + 1
    for frame in range(data[‘start’], end):
        print(“working on frame: ” + str(frame))
        data[‘scn’].frame_set(frame)
        # get time for this loop
        js_data[‘times’] += ‘%f ,’ % ((frame – data[‘start’]) / data[‘fps’])
        # keyframes for active camera/cameras
        if include_active_cam and data[‘active_cam_frames’] != []:
            if len(data[‘active_cam_frames’]) == 1:
                cur_cam_index = 0
            else:
                cur_cam_index = frame – data[‘start’]
            active_cam = data[‘active_cam_frames’][cur_cam_index]
            # get cam name
            name_ae = active_cam_name
            # convert cam transform properties to AE space
            ae_transform = convert_transform_matrix(active_cam.matrix_world.copy(), data[‘width’], data[‘height’], data[‘aspect’], x_rot_correction=True)
            # convert Blender’s lens to AE’s zoom in pixels
            zoom = convert_lens(active_cam, data[‘width’], data[‘height’], data[‘aspect’])
            # store all values in dico
            position = ‘[%f,%f,%f],’ % (ae_transform[0], ae_transform[1], ae_transform[2])
            orientation = ‘[%f,%f,%f],’ % (ae_transform[3], ae_transform[4], ae_transform[5])
            zoom = ‘%f,’ % (zoom)
            js_data[‘cameras’][name_ae][‘position’] += position
            js_data[‘cameras’][name_ae][‘orientation’] += orientation
            js_data[‘cameras’][name_ae][‘zoom’] += zoom
            # Check if properties change values compared to previous frame
            # If property don’t change through out the whole animation – keyframes won’t be added
            if frame != data[‘start’]:
                if position != js_data[‘cameras’][name_ae][‘position_static’]:
                    js_data[‘cameras’][name_ae][‘position_anim’] = True
                if orientation != js_data[‘cameras’][name_ae][‘orientation_static’]:
                    js_data[‘cameras’][name_ae][‘orientation_anim’] = True
                if zoom != js_data[‘cameras’][name_ae][‘zoom_static’]:
                    js_data[‘cameras’][name_ae][‘zoom_anim’] = True
            js_data[‘cameras’][name_ae][‘position_static’] = position
            js_data[‘cameras’][name_ae][‘orientation_static’] = orientation
            js_data[‘cameras’][name_ae][‘zoom_static’] = zoom
        # keyframes for selected cameras
        if include_selected_cams:
            for i, cam in enumerate(selection[‘cameras’]):
                if cam[1] != active_cam_name:
                    # get cam name
                    name_ae = selection[‘cameras’][i][1]
                    # convert cam transform properties to AE space
                    ae_transform = convert_transform_matrix(cam[0].matrix_world.copy(), data[‘width’], data[‘height’], data[‘aspect’], x_rot_correction=True)
                    # convert Blender’s lens to AE’s zoom in pixels
                    zoom = convert_lens(cam[0], data[‘width’], data[‘height’], data[‘aspect’])
                    # store all values in dico
                    position = ‘[%f,%f,%f],’ % (ae_transform[0], ae_transform[1], ae_transform[2])
                    orientation = ‘[%f,%f,%f],’ % (ae_transform[3], ae_transform[4], ae_transform[5])
                    zoom = ‘%f,’ % (zoom)
                    js_data[‘cameras’][name_ae][‘position’] += position
                    js_data[‘cameras’][name_ae][‘orientation’] += orientation
                    js_data[‘cameras’][name_ae][‘zoom’] += zoom
                    # Check if properties change values compared to previous frame
                    # If property don’t change through out the whole animation – keyframes won’t be added
                    if frame != data[‘start’]:
                        if position != js_data[‘cameras’][name_ae][‘position_static’]:
                            js_data[‘cameras’][name_ae][‘position_anim’] = True
                        if orientation != js_data[‘cameras’][name_ae][‘orientation_static’]:
                            js_data[‘cameras’][name_ae][‘orientation_anim’] = True
                        if zoom != js_data[‘cameras’][name_ae][‘zoom_static’]:
                            js_data[‘cameras’][name_ae][‘zoom_anim’] = True
                    js_data[‘cameras’][name_ae][‘position_static’] = position
                    js_data[‘cameras’][name_ae][‘orientation_static’] = orientation
                    js_data[‘cameras’][name_ae][‘zoom_static’] = zoom
        ”’
# keyframes for all solids. Not ready yet. Temporarily not active
for i, ob in enumerate(selection[‘solids’]):
#get object name
name_ae = selection[‘solids’][i][1]
#convert ob position to AE space
”’
        # keyframes for all lights.
        if include_selected_objects:
            for i, ob in enumerate(selection[‘lights’]):
                #get object name
                name_ae = selection[‘lights’][i][1]
                type = selection[‘lights’][i][0].data.type
                # convert ob transform properties to AE space
                ae_transform = convert_transform_matrix(ob[0].matrix_world.copy(), data[‘width’], data[‘height’], data[‘aspect’], x_rot_correction=True)
                color = ob[0].data.color
                # store all values in dico
                position = ‘[%f,%f,%f],’ % (ae_transform[0], ae_transform[1], ae_transform[2])
                orientation = ‘[%f,%f,%f],’ % (ae_transform[3], ae_transform[4], ae_transform[5])
                energy = ‘[%f],’ % (ob[0].data.energy * 100.0)
                color = ‘[%f,%f,%f],’ % (color[0], color[1], color[2])
                js_data[‘lights’][name_ae][‘position’] += position
                js_data[‘lights’][name_ae][‘orientation’] += orientation
                js_data[‘lights’][name_ae][‘energy’] += energy
                js_data[‘lights’][name_ae][‘color’] += color
                # Check if properties change values compared to previous frame
                # If property don’t change through out the whole animation – keyframes won’t be added
                if frame != data[‘start’]:
                    if position != js_data[‘lights’][name_ae][‘position_static’]:
                        js_data[‘lights’][name_ae][‘position_anim’] = True
                    if orientation != js_data[‘lights’][name_ae][‘orientation_static’]:
                        js_data[‘lights’][name_ae][‘orientation_anim’] = True
                    if energy != js_data[‘lights’][name_ae][‘energy_static’]:
                        js_data[‘lights’][name_ae][‘energy_anim’] = True
                    if color != js_data[‘lights’][name_ae][‘color_static’]:
                        js_data[‘lights’][name_ae][‘color_anim’] = True
                js_data[‘lights’][name_ae][‘position_static’] = position
                js_data[‘lights’][name_ae][‘orientation_static’] = orientation
                js_data[‘lights’][name_ae][‘energy_static’] = energy
                js_data[‘lights’][name_ae][‘color_static’] = color
                if type == ‘SPOT’:
                    cone_angle = ‘[%f],’ % (degrees(ob[0].data.spot_size))
                    cone_feather = ‘[%f],’ % (ob[0].data.spot_blend * 100.0)
                    js_data[‘lights’][name_ae][‘cone_angle’] += cone_angle
                    js_data[‘lights’][name_ae][‘cone_feather’] += cone_feather
                    # Check if properties change values compared to previous frame
                    # If property don’t change through out the whole animation – keyframes won’t be added
                    if frame != data[‘start’]:
                        if cone_angle != js_data[‘lights’][name_ae][‘cone_angle_static’]:
                            js_data[‘lights’][name_ae][‘cone_angle_anim’] = True
                        if orientation != js_data[‘lights’][name_ae][‘cone_feather_static’]:
                            js_data[‘lights’][name_ae][‘cone_feather_anim’] = True
                    js_data[‘lights’][name_ae][‘cone_angle_static’] = cone_angle
                    js_data[‘lights’][name_ae][‘cone_feather_static’] = cone_feather
        # keyframes for all nulls
        if include_selected_objects:
            for i, ob in enumerate(selection['nulls']):
                # get object name
                name_ae = selection['nulls'][i][1]
                # convert ob transform properties to AE space
                ae_transform = convert_transform_matrix(ob[0].matrix_world.copy(), data['width'], data['height'], data['aspect'], x_rot_correction=True)
                # store all values in the js_data dict
                position = '[%f,%f,%f],' % (ae_transform[0], ae_transform[1], ae_transform[2])
                orientation = '[%f,%f,%f],' % (ae_transform[3], ae_transform[4], ae_transform[5])
                scale = '[%f,%f,%f],' % (ae_transform[6], ae_transform[7], ae_transform[8])
                js_data['nulls'][name_ae]['position'] += position
                js_data['nulls'][name_ae]['orientation'] += orientation
                js_data['nulls'][name_ae]['scale'] += scale
                # Check whether properties changed value compared to the previous frame.
                # If a property doesn't change throughout the whole animation, no keyframes are added for it.
                if frame != data['start']:
                    if position != js_data['nulls'][name_ae]['position_static']:
                        js_data['nulls'][name_ae]['position_anim'] = True
                    if orientation != js_data['nulls'][name_ae]['orientation_static']:
                        js_data['nulls'][name_ae]['orientation_anim'] = True
                    if scale != js_data['nulls'][name_ae]['scale_static']:
                        js_data['nulls'][name_ae]['scale_anim'] = True
                js_data['nulls'][name_ae]['position_static'] = position
                js_data['nulls'][name_ae]['orientation_static'] = orientation
                js_data['nulls'][name_ae]['scale_static'] = scale
        # keyframes for all object bundles. Not ready yet.
    # ---- write JSX file
    jsx_file = open(file, 'w')
    # make the jsx executable in After Effects (enable double click on jsx)
    jsx_file.write('#target AfterEffects\n\n')
    # Script's header
    jsx_file.write('/**************************************\n')
    jsx_file.write('Scene : %s\n' % data['scn'].name)
    jsx_file.write('Resolution : %i x %i\n' % (data['width'], data['height']))
    jsx_file.write('Duration : %f\n' % (data['duration']))
    jsx_file.write('FPS : %f\n' % (data['fps']))
    jsx_file.write('Date : %s\n' % datetime.datetime.now())
    jsx_file.write('Exported with io_export_after_effects.py\n')
    jsx_file.write('**************************************/\n\n\n\n')
    # wrap in function
    jsx_file.write("function compFromBlender(){\n")
    # create new comp
    jsx_file.write('\nvar compName = prompt("Blender Comp\'s Name \\nEnter Name of newly created Composition","BlendComp","Composition\'s Name");\n')
    jsx_file.write('if (compName){')  # Continue only if a comp name is given; if not, terminate
    jsx_file.write('\nvar newComp = app.project.items.addComp(compName, %i, %i, %f, %f, %i);' %
                   (data['width'], data['height'], data['aspect'], data['duration'], data['fps']))
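    # set the comp's displayed start time (frames converted to seconds)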
    jsx_file.write('\nnewComp.displayStartTime = %f;\n\n\n' % ((data['start'] + 1.0) / data['fps']))
    # create camera bundles (nulls)
    jsx_file.write('// ************** CAMERA 3D MARKERS **************\n\n\n')
    for i, obj in enumerate(js_data['bundles_cam']):
        name_ae = obj
        jsx_file.write('var %s = newComp.layers.addNull();\n' % (name_ae))
        jsx_file.write('%s.threeDLayer = true;\n' % name_ae)
        jsx_file.write('%s.source.name = "%s";\n' % (name_ae, name_ae))
        jsx_file.write('%s.property("position").setValue(%s);\n\n\n' % (name_ae, js_data['bundles_cam'][obj]['position']))
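    # Bundles are static 3D tracking points, so a single position value is written (no keyframes)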
    # create object bundles (not ready yet)
    # create objects (nulls)
    jsx_file.write('// ************** OBJECTS **************\n\n\n')
    for i, obj in enumerate(js_data['nulls']):
        name_ae = obj
        jsx_file.write('var %s = newComp.layers.addNull();\n' % (name_ae))
        jsx_file.write('%s.threeDLayer = true;\n' % name_ae)
        jsx_file.write('%s.source.name = "%s";\n' % (name_ae, name_ae))
        # Set values of properties, add keyframes only where needed
        if include_animation and js_data['nulls'][name_ae]['position_anim']:
            jsx_file.write('%s.property("position").setValuesAtTimes([%s],[%s]);\n' % (name_ae, js_data['times'], js_data['nulls'][obj]['position']))
        else:
            jsx_file.write('%s.property("position").setValue(%s);\n' % (name_ae, js_data['nulls'][obj]['position_static']))
        if include_animation and js_data['nulls'][name_ae]['orientation_anim']:
            jsx_file.write('%s.property("orientation").setValuesAtTimes([%s],[%s]);\n' % (name_ae, js_data['times'], js_data['nulls'][obj]['orientation']))
        else:
            jsx_file.write('%s.property("orientation").setValue(%s);\n' % (name_ae, js_data['nulls'][obj]['orientation_static']))
        if include_animation and js_data['nulls'][name_ae]['scale_anim']:
            jsx_file.write('%s.property("scale").setValuesAtTimes([%s],[%s]);\n\n\n' % (name_ae, js_data['times'], js_data['nulls'][obj]['scale']))
        else:
            jsx_file.write('%s.property("scale").setValue(%s);\n\n\n' % (name_ae, js_data['nulls'][obj]['scale_static']))
    # create solids (not ready yet)
    # create lights
    jsx_file.write('// ************** LIGHTS **************\n\n\n')
    for i, obj in enumerate(js_data['lights']):
        name_ae = obj
        jsx_file.write('var %s = newComp.layers.addLight("%s", [0.0, 0.0]);\n' % (name_ae, name_ae))
        jsx_file.write('%s.autoOrient = AutoOrientType.NO_AUTO_ORIENT;\n' % name_ae)
        # Set values of properties, add keyframes only where needed
        if include_animation and js_data['lights'][name_ae]['position_anim']:
            jsx_file.write('%s.property("position").setValuesAtTimes([%s],[%s]);\n' % (name_ae, js_data['times'], js_data['lights'][obj]['position']))
        else:
            jsx_file.write('%s.property("position").setValue(%s);\n' % (name_ae, js_data['lights'][obj]['position_static']))
        if include_animation and js_data['lights'][name_ae]['orientation_anim']:
            jsx_file.write('%s.property("orientation").setValuesAtTimes([%s],[%s]);\n' % (name_ae, js_data['times'], js_data['lights'][obj]['orientation']))
        else:
            jsx_file.write('%s.property("orientation").setValue(%s);\n' % (name_ae, js_data['lights'][obj]['orientation_static']))
        if include_animation and js_data['lights'][name_ae]['energy_anim']:
            jsx_file.write('%s.property("intensity").setValuesAtTimes([%s],[%s]);\n' % (name_ae, js_data['times'], js_data['lights'][obj]['energy']))
        else:
            jsx_file.write('%s.property("intensity").setValue(%s);\n' % (name_ae, js_data['lights'][obj]['energy_static']))
        if include_animation and js_data['lights'][name_ae]['color_anim']:
            jsx_file.write('%s.property("Color").setValuesAtTimes([%s],[%s]);\n' % (name_ae, js_data['times'], js_data['lights'][obj]['color']))
        else:
            jsx_file.write('%s.property("Color").setValue(%s);\n' % (name_ae, js_data['lights'][obj]['color_static']))
        # spot lights carry two extra properties: cone angle and cone feather
        if js_data['lights'][obj]['type'] == 'SPOT':
            if include_animation and js_data['lights'][name_ae]['cone_angle_anim']:
                jsx_file.write('%s.property("Cone Angle").setValuesAtTimes([%s],[%s]);\n' % (name_ae, js_data['times'], js_data['lights'][obj]['cone_angle']))
            else:
                jsx_file.write('%s.property("Cone Angle").setValue(%s);\n' % (name_ae, js_data['lights'][obj]['cone_angle_static']))
            if include_animation and js_data['lights'][name_ae]['cone_feather_anim']:
                jsx_file.write('%s.property("Cone Feather").setValuesAtTimes([%s],[%s]);\n' % (name_ae, js_data['times'], js_data['lights'][obj]['cone_feather']))
            else:
                jsx_file.write('%s.property("Cone Feather").setValue(%s);\n' % (name_ae, js_data['lights'][obj]['cone_feather_static']))
        jsx_file.write('\n\n')
    # create cameras
    jsx_file.write('// ************** CAMERAS **************\n\n\n')
    for i, cam in enumerate(js_data['cameras']):  # more than one camera can be selected
        name_ae = cam
        jsx_file.write('var %s = newComp.layers.addCamera("%s",[0,0]);\n' % (name_ae, name_ae))
        jsx_file.write('%s.autoOrient = AutoOrientType.NO_AUTO_ORIENT;\n' % name_ae)
        # Set values of properties, add keyframes only where needed
        if include_animation and js_data['cameras'][name_ae]['position_anim']:
            jsx_file.write('%s.property("position").setValuesAtTimes([%s],[%s]);\n' % (name_ae, js_data['times'], js_data['cameras'][cam]['position']))
        else:
            jsx_file.write('%s.property("position").setValue(%s);\n' % (name_ae, js_data['cameras'][cam]['position_static']))
        if include_animation and js_data['cameras'][name_ae]['orientation_anim']:
            jsx_file.write('%s.property("orientation").setValuesAtTimes([%s],[%s]);\n' % (name_ae, js_data['times'], js_data['cameras'][cam]['orientation']))
        else:
            jsx_file.write('%s.property("orientation").setValue(%s);\n' % (name_ae, js_data['cameras'][cam]['orientation_static']))
        if include_animation and js_data['cameras'][name_ae]['zoom_anim']:
            jsx_file.write('%s.property("zoom").setValuesAtTimes([%s],[%s]);\n\n\n' % (name_ae, js_data['times'], js_data['cameras'][cam]['zoom']))
        else:
            jsx_file.write('%s.property("zoom").setValue(%s);\n\n\n' % (name_ae, js_data['cameras'][cam]['zoom_static']))
    # Exit import if no comp name was given
    jsx_file.write('\n}else{alert ("Exit Import Blender animation data \\nNo Comp\'s name has been chosen","EXIT")};')
    # Close function
    jsx_file.write("}\n\n\n")
    # Execute function. Wrap it in an "undo group" so the whole import can be undone in one step
    jsx_file.write('app.beginUndoGroup("Import Blender animation data");\n')
    jsx_file.write('compFromBlender();\n')  # execute function
    jsx_file.write('app.endUndoGroup();\n\n\n')
    jsx_file.close()
    data['scn'].frame_set(curframe)  # restore the frame that was current in Blender before the export
##########################################
# DO IT
##########################################
def main(file, context, include_animation, include_active_cam, include_selected_cams, include_selected_objects, include_cam_bundles):
    data = get_comp_data(context)
    selection = get_selected(context)
    write_jsx_file(file, data, selection, include_animation, include_active_cam, include_selected_cams, include_selected_objects, include_cam_bundles)
    print("\nExport to After Effects Completed")
    return {'FINISHED'}
##########################################
# ExportJsx class register/unregister
##########################################
from bpy_extras.io_utils import ExportHelper
from bpy.props import StringProperty, BoolProperty
class ExportJsx(bpy.types.Operator, ExportHelper):
    """Export selected cameras and objects animation to After Effects"""
    bl_idname = "export.jsx"
    bl_label = "Export to Adobe After Effects"
    filename_ext = ".jsx"
    filter_glob = StringProperty(default="*.jsx", options={'HIDDEN'})
    include_animation = BoolProperty(
            name="Animation",
            description="Animate Exported Cameras and Objects",
            default=True,
            )
    include_active_cam = BoolProperty(
            name="Active Camera",
            description="Include Active Camera",
            default=True,
            )
    include_selected_cams = BoolProperty(
            name="Selected Cameras",
            description="Add Selected Cameras",
            default=True,
            )
    include_selected_objects = BoolProperty(
            name="Selected Objects",
            description="Export Selected Objects",
            default=True,
            )
    include_cam_bundles = BoolProperty(
            name="Camera 3D Markers",
            description="Include 3D Markers of Camera Motion Solution for selected cameras",
            default=True,
            )
    # include_ob_bundles = BoolProperty(
    #         name="Objects 3D Markers",
    #         description="Include 3D Markers of Object Motion Solution for selected cameras",
    #         default=True,
    #         )
    def draw(self, context):
        layout = self.layout
        box = layout.box()
        box.label('Animation:')
        box.prop(self, 'include_animation')
        box.label('Include Cameras and Objects:')
        box.prop(self, 'include_active_cam')
        box.prop(self, 'include_selected_cams')
        box.prop(self, 'include_selected_objects')
        box.label('Include Tracking Data:')
        box.prop(self, 'include_cam_bundles')
        # box.prop(self, 'include_ob_bundles')
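    # Only allow export when there is something to export: a selection or a scene camera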
    @classmethod
    def poll(cls, context):
        active = context.active_object
        selected = context.selected_objects
        camera = context.scene.camera
        ok = selected or camera
        return ok
    def execute(self, context):
        return main(self.filepath, context, self.include_animation, self.include_active_cam, self.include_selected_cams, self.include_selected_objects, self.include_cam_bundles)
def menu_func(self, context):
    self.layout.operator(ExportJsx.bl_idname, text="Adobe After Effects (.jsx)")
def register():
    bpy.utils.register_class(ExportJsx)
    bpy.types.INFO_MT_file_export.append(menu_func)
def unregister():
    bpy.utils.unregister_class(ExportJsx)
    bpy.types.INFO_MT_file_export.remove(menu_func)
if __name__ == "__main__":
    register()
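Once the script has been run in Blender's text editor (or installed as an add-on), the exporter appears under File > Export > Adobe After Effects (.jsx). As a quick check, it can also be invoked from Blender's Python console; this is a minimal sketch assuming the class registered successfully (the operator id export.jsx comes from bl_idname above):

import bpy
# Open the exporter's file browser; 'INVOKE_DEFAULT' shows the dialog instead of running immediately
bpy.ops.export.jsx('INVOKE_DEFAULT')

The generated .jsx file can then be run inside After Effects via File > Scripts > Run Script File, or by double-clicking it, which works because of the #target AfterEffects directive written at the top of the exported script.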