A semester in retrospect

Nov 29, 2017

As my third-to-last semester at Georgia Tech starts to wrap up, I’m left with some time to myself to reflect on what I’ve done over the last few months.

Work-Life Balance

This semester, I took 16 credit-hours of upper-level classes and worked 20 hours a week part-time at my internship with GE Digital. Since September, I’ve been in a 24-hour cycle of wake up, go to class, go to work, go to class, go to sleep. The weekends consisted of 1. catching up on sleep and 2. catching up on homework. So, with few exceptions, it’s been non-stop work for me. I’m young, I can take it.

The benefit to this sort of lifestyle is that my skills were being expanded every day. I’d go to class, learn something new. Go to work, learn something new. But the downside is that I was only able to learn the things presented to me in class or assigned to me at work. I had no time for personal improvement, no time to start new personal projects.

I’m used to having spare time to explore my interests. This is how I got into computer science in the first place - in my spare time in grade school, I’d be reading books about Active Server Pages (yuck) and HTML4, planning out code on notebook paper, and waiting for the fleeting moments when my parents would allow me on the computer to learn even more on the Internet. I’ve been doing variations of that for the last ten years of my life. Now that I didn’t have the ability to choose what I was learning, I found it difficult to focus on the same things, day in and day out, for 18 weeks straight.

Dealing With (or Running From) Burn-Out

burn-out (noun): physical or mental collapse caused by overwork or stress

I had a naive thought early into the semester. On August 31st, I said to myself, “I’m taking 16 credit hours this semester and I’m already feeling stressed out but I think after a couple of weeks I’ll be reacclimated to it.”

I then proceeded to have the same thought about halfway through every week - “Okay, this is stressful, but it should be over after this week!”

Turns out I was tricking myself every week, which led to decreased motivation and burn-out as the semester wore on. It wasn’t useful to tell myself, over and over again, that I’d soon have no work left. I ended up being even more stressed because I kept having to deal with the disappointment of not being “done” with my work.

I guess the lesson I learned is that you should never look forward to being “done” because the work in life is never-ending. Instead of working like a dog every day in the pursuit of being “done”, I should have set reasonable goals for myself and allowed myself some small breaks between objectives. That would have been much less stressful.

Moving Forward

This semester has convinced me that I need to put more work into planning my life. Just taking things as they come led to a stressful stretch of life for the last few months.

My current internship ended this November, so now I am on the hunt for a new one. In the meantime, I have started a small Bitcoin consulting company with some friends from Georgia State. It’s called BitCraft and we’re actively pursuing some clients already. Now that I’m freed up from GE, I can devote time to this.

I’ll still be taking 16 credit hours at Georgia Tech; however, I don’t think that they will be as stressful as the 16 hours I took this semester. I’m hoping this will allow me to focus more while also being less stressed out.

In the end, this was a long, educational semester. The end is in sight. And I mean it, for real this time…

“Okay, this is stressful, but it should be over after this week!”

A survey of crossdomain.xml vulnerabilities

Aug 15, 2014

Vulnerable crossdomain.xml files can be used by malicious people to run CSRF attacks if the victim has Flash installed on their computer. In response to a post by chs on crossdomain.xml proofs of concept and Seth Art’s real-world exploit of Bing using crossdomain.xml, I created an application in Ruby which parses the Alexa top million site list (CSV, 10MB) and scans for vulnerable crossdomain.xml files. Vulnerable here is defined as a crossdomain.xml file which permits connections from any domain name (*). It sorts the domains into four categories:

  • Unable to connect: Ruby was unable to establish a connection to the website. Interestingly enough, a significant portion of Alexa’s top million sites were inaccessible during this survey.
  • Invalid or 404: Returned 404 or the returned XML was not valid.
  • Secure: The XML returned does not contain a reference to allow-access-from domain="*". This does not necessarily mean that the whole crossdomain.xml file is secure, just that it is not vulnerable to the most basic of CSRF exploits.
  • Permissive: The XML returned from a GET to /crossdomain.xml does allow access from any domain.
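
For reference, a maximally permissive policy file — the pattern this survey counts as “Permissive” — looks like the following. The boilerplate around the wildcard rule varies from site to site; the wildcard `allow-access-from` element is what matters:

```xml
<?xml version="1.0"?>
<cross-domain-policy>
  <!-- Wildcard: Flash content served from ANY origin may make requests
       here and read the responses -->
  <allow-access-from domain="*" />
</cross-domain-policy>
```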

Without further ado, let’s get into it.

The Code

I chose Ruby for this project because it has good XML processing libraries, is reasonably fast, and because I needed an excuse to practice Ruby.

require 'net/http'
require 'rexml/document'
include REXML
require 'csv'

counters = {
	'unconnect'   => 0,
	'invalid-404' => 0,
	'permissive'  => 0,
	'secure'      => 0,
	'total-count' => 0
}

# Dump the tallies if the scan is interrupted
trap 'SIGINT' do
	print counters.inspect
	exit 130
end

permissive = CSV.open('permissive.csv', 'wb')

CSV.foreach('top-1m.csv') do |row|
	counters['total-count'] += 1
	print "\n" + 'Getting ' + row[1] + '... '
	begin
		xd = Net::HTTP.get(row[1], '/crossdomain.xml')
	rescue StandardError
		counters['unconnect'] += 1
		print 'unable to connect'
		next
	end
	begin
		xd = REXML::Document.new(xd)
	rescue REXML::ParseException
		counters['invalid-404'] += 1
		print 'invalid xml'
		next
	end
	wildcard_access = false
	XPath.each(xd, '//allow-access-from') do |access|
		next unless access.attributes['domain'] == '*' # <allow-access-from domain="*">

		wildcard_access = true
		counters['permissive'] += 1
		print 'permissive'
		permissive << row
	end
	unless wildcard_access
		counters['secure'] += 1
		print 'secure'
	end
end

print counters.inspect

The Results

After 160,169 websites were inspected over the course of a few days, the script hung.

  • 3,535 (2.2%) of the websites were down at the time of the scan.
  • 84,883 (53%) of the websites had invalid or non-existent XML files at /crossdomain.xml.
  • 67,097 (41.9%) of the websites surveyed had a “secure” crossdomain.xml file.
  • 4,653 (2.9%) of the websites surveyed had insecure crossdomain.xml files.

A wildcard crossdomain.xml file is fine for certain websites, but a quick scan of the results reveals a number of banks, bitcoin websites, and popular entertainment sites (9gag and Vimeo included) with poor crossdomain.xml files. The full results are available as a CSV with columns corresponding to the Alexa rank and the domain name.

Although a full scan of the Alexa top million was not completed, an alarmingly large number of sites have overly permissive and insecure crossdomain.xml files.

spoofident: A fake identd written in Python

Jun 21, 2014

The workhorse function of spoofident

Many protocols such as IRC require or strongly suggest the use of an ident daemon to prove that you are who you say you are, or to hold you accountable for your actions. An identd is supposed to respond to queries about which user is using which port; however, this information can be potentially dangerous. A real identd allows attackers to gain information about your system - usernames, active ports, even a fingerprint of your operating system. The ident RFC (RFC 1413) itself cites these vulnerabilities.

I had a need to run an ident server; however, I am wary of creating unnecessary security holes in my server. That’s why I wrote spoofident. spoofident is a daemon written in Python which provides a custom username/OS response to all incoming ident queries. It is dual-stack (meaning that it runs on both IPv4 and IPv6) and written to consume fewer resources than oidentd. I suggest using it if you are in a situation where you need to provide ident but refuse to compromise the security of your systems for that functionality.
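
To give a feel for the idea, here is a minimal sketch of a spoofing identd in Python. This is not spoofident’s actual code - the handler, constants, and one-query-per-connection assumption are all illustrative - but the reply format follows RFC 1413:

```python
import socketserver

# Illustrative constants; a real daemon would make these configurable.
FAKE_USER = "nobody"  # username reported for every query
FAKE_OS = "UNIX"      # operating-system field reported for every query

def build_reply(query: str) -> str:
    """Build an RFC 1413 reply for one 'server-port, client-port' query."""
    try:
        server_port, client_port = (int(p) for p in query.split(","))
    except ValueError:
        # Malformed query: respond with an error instead of leaking anything.
        return "0, 0 : ERROR : INVALID-PORT\r\n"
    # The same fabricated identity, for everyone, every time.
    return f"{server_port}, {client_port} : USERID : {FAKE_OS} : {FAKE_USER}\r\n"

class SpoofHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One query per connection, e.g. "6667, 54321\r\n"
        query = self.rfile.readline(512).decode("ascii", "replace").strip()
        self.wfile.write(build_reply(query).encode("ascii"))

# To serve (binding to the identd port 113 requires elevated privileges):
#   with socketserver.TCPServer(("", 113), SpoofHandler) as srv:
#       srv.serve_forever()
```

Because every reply is fabricated, nothing about real usernames, sessions, or the underlying OS leaks - which is the whole point.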

GitHub repo for spoofident

README for spoofident
