Archive for September, 2011

If you’ve been freelancing for long, then there’s no doubt you’ve read some of the horror stories about bad clients. You may have even run into a few bad clients in your own business.

Over the years, I have noticed that most bad clients seem to fall into certain common patterns. In this post, I share those patterns with you. Keep in mind that none of these bad client types is based on any one client that I’ve ever worked with. Rather, these examples are generalizations of the many different forms a bad client can take. Personally, I rarely have to deal with a bad client in my business, and I’ll explain how you too can avoid them later in the article.

Here are a few descriptions of some bad clients that you might encounter during your freelancing career:

Ten Types of Bad Clients

  1. The free samples guy. Before you can get a gig with this “client,” you have to submit an original free sample for which you will not be paid. No matter how hard you work on your sample, it is never quite up to par for this client. With a few exceptions, I think this is a scam to get free work from a freelancer. Yes, it is important for clients to see samples, but that is why we have portfolios.
  2. The scope creep gal. The project seems relatively small, so you quote a reasonable (but fairly low) price. Once you start, however, the project changes. Ms. Scope Creep contacts you with this, “I forgot to tell you, the job also includes. . .” Even when you’re done, you’re not. There’s still more work that she forgot to mention. . . all for the same low price that you originally quoted.
  3. Vague Victor. Vague Victor needs something for his website and he wants you to provide that something. The trouble is, Victor is just not sure exactly what that something is. “I’ll know it when I see it,” Victor says. Unsuspecting freelancers often try to help Victor figure out exactly what he needs. The trouble is, what the freelancer suggests is never quite what Victor had in mind.
  4. Fannie Freebie. The job description sounds like a perfect fit. What an awesome opportunity! You can’t believe your good luck. To think that they would pay someone to do such a fun project and they have approached you! Wait one minute – where’s the pay? You scan the listing eagerly, only to discover that there is no pay at all. It turns out that Fannie Freebie is not hiring freelancers at all. She’s looking for volunteers.
  5. Mr. Unavailable. You have some questions for this client so you send him an e-mail (he hasn’t left you with a phone number). Days, maybe even weeks, pass. There is still no answer from Mr. Unavailable. You’re beginning to wonder if this client is even still in business. Suddenly, without warning, he reappears. “Where’s my project?” he demands. Your questions are still unanswered.
  6. Clingy Sue. Clingy Sue is the exact opposite of Mr. Unavailable. She is so opposite, in fact, that communicating with her takes up most of your working time each day. She contacts you several times every single day. She asks for a copy of your initial ideas, outlines, preliminary drafts, rough drafts, and first drafts. If you are late answering a single e-mail, Clingy Sue wonders why.
  7. Revisionist Ronnie. Accepting a job from Ronnie will keep you busy. Unfortunately, this client will not keep your pockets full. No matter how good the work is that you turn in, it is not quite good enough for Ronnie. He always has one more change request, one more fix, one final revision . . .
  8. Gossip girl. At first you might be flattered by this “client.” She seems to have the “inside scoop” on all of your competitors, and all of her competitors too. She’s more than willing to share (confidentially, of course) what she knows with you (especially when it comes to what she knows about other people). Watch out, though! Before you know it, this client will be dishing out dirt about you.
  9. The check is in the mail guy. This is the one “client” that every freelancer dreads. At first, he appears to be a normal client. Then you invoice him and his true nature comes out. Suddenly, he has all kinds of reasons not to pay you (none of which have to do with the quality of your work). He has had a family emergency. He is in a temporary cash crunch. His bank made a mistake. Whatever the excuse, you can be sure that it is not his fault. He will put the check in the mail as soon as he can. In fact, the check may already be in the mail (except that it never comes).
  10. The lowballer. Most freelancers have probably encountered this “client.” No matter what price you quote for their project, they know somebody else who will do the work for even less. “Is this your best price?” the lowballer interjects. “The market rate for this on XYZ site is…” My answer to the lowballer is always the same: this is a fair rate. If you can get the work done somewhere else cheaper, then go ahead and do it.

How To Avoid Working With Bad Clients

As I stated before, I have great clients. Part of the reason that I have such good clients is that I research each and every client before I accept an assignment from them. I search on the Internet to see if other freelancers are talking about the client. I check with the Better Business Bureau (companies that treat their own clients badly will likely treat you badly as well). I check out the client’s website.

Here are some ways you too can avoid bad clients:

  • Thoroughly research your prospective clients before working with them
  • Discuss and outline all project details before accepting a client
  • Be honest with yourself, and don’t take on new clients out of desperation
  • Follow your instincts, and don’t take on clients that give you a bad feeling
  • Watch out for catch-phrases, under- or over-communication, and other potential signs of a bad client

If there’s any doubt in your mind about whether the client will be a good one, then I don’t recommend you accept the project.

Cool Celtic Clouds

Posted: September 23, 2011 in Analysis

Well, it’s been quite a month of weather, as it were: from the US, across the Atlantic and through Ireland, with the tail end of Hurricane Katia reaching us most recently. Along this journey, it seems some interesting damage has been done and lessons learnt.

Early in August, a lightning strike in Dublin knocked out Amazon’s European cloud services. This was most interesting to me, as I had heard a senior executive of the company speaking at a dinner in June, and he seemed to need convincing of just how important providers such as Amazon should be considered to the critical national infrastructure (CNI), given the reliance of so many SMEs and others on their service provision. So I was keenly interested in this story about what can be learnt from the Amazon outage.

The outage lasted for more than 24 hours which, in the world of all things online, ‘everything everywhere’, is a significant length of time to be unable to access key information or to be trading.

High availability solutions for disaster recovery are the latest trend, and this is certainly where Amazon needed to be with their customer response. However, both the primary and secondary power supplies were knocked out in the Dublin lightning strike – the electricity supplier and the backup generator were taken out together – and it seems crazy to have carried such a vulnerability. BCP and DR work is often about considering the absolute worst case and imagining beyond your average scenario to ensure you are prepared for all eventualities.

Amazon weren’t the only ones to be affected, obviously. But there has been such a hard sell on all things cloud in the last 12 months or so that this is a wake-up call: a reminder of the impact of not having your systems and services available, and of what the business impact analysis (BIA) actually comes out at once you really experience it.

Data replication is another option and needs to be considered within the realms of both the ICT issues and the records and information management requirements of each individual organisation. There are many tax, legal, regulatory and audit requirements for records retention and management, and these need to be borne in mind when data is being stored, replicated or otherwise handled.

All of this is best encapsulated by the question ‘how many cloud failures have to happen before consumers take notice?’, as Microsoft and OOMA suffered outages recently too. Having all your eggs in one basket has never been a good risk management strategy, and yet there is still a headlong charge towards the white fluffy clouds without clear contract clauses, rules of engagement or service level agreements that hold sufficient water to keep you afloat in the event of a ‘disaster’. That said, I heard a law professor speaking recently who said that ‘factual functional controls matter more than contracts and labels’. Either way, people will need to know their respective responsibilities and to act accordingly. The same speaker introduced my brain to the term ‘sharding’ in this context, and to a whole world of consideration with regard to encryption of data which could ultimately make data storage and handling centres immune from any requirement to comply with data protection related legislation. Such dichotomies and conundrums to be worked through…

Cloud services will continue to progress, and a parallel can be drawn with electricity, when Thomas Edison first tried to ‘sell’ the notion of an always-on (scalable and elastic) service that did not require a great deal of staffed support. ‘The Cloud’, in all its forms, will continue to deliver the modern-day equivalent.

I also heard about the intention to set up data-barges offshore from Ireland – choosing the wee island because its baseline weather is generally colder than the usual data centre locations. The barges are designed to generate free electricity from the water on which they sit, and this is then re-circulated to provide the cooling. It’s all clever, environmentally friendly stuff. But by the time you factor in the implication that you now need to learn about maritime law, you know that there’s still work to be done in this area of Cloud Computing! It is definitely becoming a case of C3 (not C3PO yet though!) – Cloud Computing Compliance.

The man next door is starting a war

Posted: September 22, 2011 in Analysis

The internet now gives those with a cause the opportunity and the tool to start a war says John Curry. John is an academic currently carrying out collaborative work with various bodies on gaming cyber war.

Before exploring the concept of individuals pitching nation-states into a war or a war-type situation, a fundamental misconception about cyber war should be challenged.

The very phrase ‘cyber warfare’ is a misnomer; it conjures up visions of government crisis teams controlling armies of hackers, making decisions about the deployment of cyber weapons against the national infrastructure of the enemy.

This misconception is propagated by books such as Richard Clarke’s Cyber War and pseudo-documentaries showing an unfolding crisis, such as the CNN Cyber Shockwave exercise.

Between 1998 and 2010, 22 attacks against national infrastructures were reported in the public domain. However, these attacks have been practically all one way, with the defending nations only reacting to block the incoming attacks and keep their IT systems running. There has only been one cyber war recorded in the media, and that war does not even have a name.

The first cyber war

On 1 April 2001, a Chinese fighter plane collided with an American spy plane edging along Chinese airspace over the Pacific. The damaged American plane jettisoned its surveillance equipment and landed at a very surprised Chinese airfield. The Chinese fighter pilot was not recovered.

In apparent response to the incident, China defaced 500 US websites. American hackers retaliated and defaced 3,500 Chinese websites. Several months later, China released the so-called Code Red worm, which infected 359,000 systems across the world at its peak. Although blamed on anonymous hackers, the attacks were all traced to government servers on both sides of the Pacific.

The first cyber war apparently ended in a draw. A footnote to history records that the American system developer, Ken Eichman, who worked out how to block the attack, was rewarded by being invited to the White House for lunch. Curiously, he just worked for a publishing house.

History speaks

Discussing history in a computer magazine might seem strange, but the knowledge of the past challenges the conception that wars between countries are solely the preserve of states and governments. History is littered with examples of individuals starting wars.

In 1754, George Washington ambushed a French scouting party (an act of aggression in peacetime). It was one of the first military steps leading to the Seven Years War (1756-1763). The actions of Gordon (of Khartoum) led the British empire into an unwanted war in 1885.

The start of World War I was triggered by the assassination of Archduke Franz Ferdinand by the secret Serbian nationalist Black Hand Society, using the new technology of easily concealed pistols. The sale of readily available internet-based weapons for use by individuals or small groups has opened the possibility of nation states suddenly finding themselves in a war they did not expect or want.

Cyber tools have enabled individuals and determined groups to wage psychological attacks via the internet. The idea of such attacks is as old as history itself. An interesting example was the British suffragette movement that used the telephone to start a bogus general mobilisation in the early 1920s.

There have already been many examples of the internet being used to spread propaganda, make threats, spread disinformation or jam the web by attacking internet service providers and government sites. These attacks can all cause damage and create chaos.

The potential for such attacks has been apparently accepted by the international community as the downside of a connected world.

WikiLeaks, an organisation headed by Julian Assange, publishes private, secret and classified media from anonymous sources and news leaks.

The actions of Private First Class Bradley Manning in giving over 250,000 leaked diplomatic cables to WikiLeaks had international implications. The overthrow of the presidency in Tunisia has been attributed in part to a reaction against the massive corruption revealed by the leaked cables.

The casual confirmation by American diplomatic staff of widespread endemic corruption in the Tunisian regime was the pebble that started the social avalanche that brought down the government. The social revolution in the Middle East cannot have been foreseen by the WikiLeaks group.

The Stuxnet worm, which damaged the Iranian nuclear program by controlling the so-called SCADA systems that interface between computers and machinery, has been a wake-up call. It demonstrated the possibility of launching tactical attacks on civil services like electricity, water supply, government services and banking. Such a successful attack could cause the chaos and disorder normally associated with a major natural disaster or a war.

Strategic or ‘mega’ attacks involving very large scale sustained action against strategic national sites – defence-related systems, missile control, air-traffic control, money transfer and so on – are probably outside the scope of even large criminal organisations.

However, limited tactical cyber attacks that cause considerable disruption, heavy financial loss and/or political turmoil are feasible. As demonstrated by the Russian attack on Estonia in 2007, as soon as the cash points stop working, the man on the street demands retaliation.

Know thine enemy

One of the issues in cyber warfare is working out who the aggressor actually is. The discovery of malware in the US power grid in April 2009 was believed to have come from China and Russia, but the proof was not conclusive. The cyber attack on Estonia was started by a Russian blogger who was upset by the removal of a Russian statue paying tribute to Soviet soldiers for driving the Nazis out of Estonia during World War II.

The blogger helpfully included code for conducting a denial of service attack against Estonia in their blog, which others apparently picked up and used. It is an open question how much of the subsequent cyber attack was sanctioned by the Russian state or whether the actions of the hackers were merely tolerated.

If a country is on the receiving end of an effective widespread cyber attack that affects the man on the street, then the pressure on a government to resort to the time-honoured tradition of military action might be overwhelming. Technically minded, determined individuals or small groups now have the potential to shake the world through cyber warfare. Of course, as demonstrated by WikiLeaks, the actual consequences may not be to the liking of those who started it all.

When news came through recently about the Bondi Westfield shopping centre’s new “Find my car” feature, the security and privacy implications almost jumped off the page:

“Wait – so you mean all I do is enter a number plate – any number plate – and I get back all this info about other cars parked in the centre? Whoa.”

If that statement sounds a bit liberal, read on and you’ll see just how much information Westfield is intentionally disclosing to the public.

Intended use

Let’s begin with how the app looks to the end user. This all starts out life as the Westfield malls app in the iTunes app store, and for some time now it’s been able to help you find stores in the centre. More recently though, it has gained a “Parking” feature which allows you to enter a number plate, get back a series of images and then receive directions on how to navigate to the one which appears to be your vehicle. Perhaps Westfield drew inspiration from Seinfeld’s The Parking Garage on this one! Here’s how it all ties together:

[Screenshots: Westfield malls app home page; entering a number plate to search by; four photos of vehicles matching the search results; directions to the parking bay]

To the casual user of the application, the number plates – and this is what I’m really talking about when I say “privacy” – appear to be indiscernible. Certainly it’s not clear from the images above, but it’s also not clear after screen grabbing and expanding it:

[Image: close-up shot of a vehicle]

The number plate is actually AWC11A, but we’ll get back to that.

Anyway, this is all made possible by using the Park Assist technology, which puts the little guy in the image below on the roof between each park so it can both notify customers of vacant spots and snap pictures of the cars once they park:

[Image: Park Assist M3 camera vision system]

The interesting bit though is that the implementation of this app readily exposes some fairly serious, rather extensive data that many people would probably be concerned about. And it doesn’t have to.

Under the covers

The way these smartphone apps tend to work, when they depend on external data retrieved from the internet, is to communicate backwards and forwards via services which travel over the same protocol as most of your other internet traffic – HTTP. Very often these services contain all sorts of information, with only a small subset actually being exposed to the user via the application consuming the service. In Westfield’s case it was fair to assume that this service would contain some information about the vehicles matching the number plate search and what their location is.

Using a free tool like Fiddler and allowing it to act as an HTTP gateway for the iPhone, it’s easy to interpret and inspect the contents of communication between the app and the server it’s talking to. When I did this for the Westfield app, here’s what I found:

[Image: Fiddler trace when locating a vehicle]

What we’re seeing here is a total of five requests made to Westfield’s server: The first one returns a JSON response which contains the data explaining the location of cars matching the search. The next four requests are for images which are pictures of the cars returned by the search. Here’s what we get:

[Images: Vehicle 1, Vehicle 2, Vehicle 3, Vehicle 4 – the pictures returned by the search]

Apart from the slight difference in aspect ratio, this is exactly what we saw in the original app so no surprises yet. But here’s where it gets really interesting – let’s examine that JSON response. Firstly, it’s a GET request to the following address:

http://120.151.59.193/v2/bays.json?visit.plate.text=abc123~0.3&is_occupied=true&limit=4&order=-similarity

One of the nice things about a RESTful service like this is the ability to easily pass parameters in the request. In the URL above, we can see four parameters (see the sketch after this list):

  1. The number plate we’re searching for appended with “~0.3”
  2. An “is_occupied” value set to “true”
  3. A “limit” set to “4”
  4. An “order” set to “-similarity”
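
Just to make the mechanics concrete, here’s a rough sketch of how a request like this could be reproduced in a few lines of Python. This is not Westfield’s or Park Assist’s own code: the endpoint and query parameters are simply those observed in the URL above, and the response field names are guesses, so treat it as an illustration of the pattern rather than a working client.

  import requests  # third-party HTTP client library

  # Endpoint and query string parameters exactly as observed in the URL above.
  # The "~0.3" suffix on the plate text appears to be a similarity threshold.
  url = "http://120.151.59.193/v2/bays.json"
  params = {
      "visit.plate.text": "abc123~0.3",  # the plate being searched for
      "is_occupied": "true",             # only bays that currently hold a car
      "limit": "4",                      # return at most four matches
      "order": "-similarity",            # closest matches first
  }

  response = requests.get(url, params=params, timeout=10)
  bays = response.json()  # a list of bay records, one per matching vehicle

  for bay in bays:
      # Field names below are illustrative guesses; all the article confirms is
      # that the plate text and entry time sit under a "visit" node.
      visit = bay.get("visit", {})
      print(visit.get("text"), visit.get("entry_time"), bay.get("image_url"))

A handful of lines like this is all the “knowhow to build some basic software” mentioned below really amounts to, which is precisely the problem.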

Now when we look at the response body, we see the following:

[Image: the four JSON collections returned]

What this is telling us is that the JSON response contains four collections of data. Let’s expand that first one and see what’s inside:

[Image: the contents of the JSON collection]

This is a fair bit of data. Actually it’s a lot of data and it’s being sent down to your phone every time you try to locate a car. Remember, all the app needs to do is show us an image of what may be our car. But the really worrying bit is what’s inside the “visit” node; Westfield is storing and making publicly accessible the time of entry and the number plate (see the “text” field) of what appears to be every single vehicle in the centre. What’s more, it’s available as a nice little service easily consumable by anyone with the knowhow to build some basic software.

But this is only four results, right? Actually, it’s worse than that. A lot worse. That URL for the service endpoint we looked at earlier contains a number of parameters – filters, if you like – and removing these readily provides the current status of all 2,550 sensors. This includes the number plate of any car currently occupying a space and as you can see, it’s available by design to anyone:

http://120.151.59.193/v2/bays.json

You can freely request that resource over and over, as many times as desired, and then store the data to your heart’s content. Now that is a privacy concern.

The impact to privacy

What this means is that anyone with some rudimentary programming knowledge can track the comings and goings of every single vehicle in one of the country’s busiest shopping centres. In an age where we’ve become surrounded by surveillance cameras, we expect our movements to be monitored by the likes of centre management or security forces, but not to be put on public display to anyone with an internet connection!

Think about the potential malicious uses if you’re able to write a simple bit of software:

  1. A stalker receives a notification when their victim enters the car park (and they’ll know exactly where the victim is parked).
  2. A suspicious husband tracks when his wife arrives and then leaves the car park.
  3. An aggrieved driver holding a grudge from a nearby road rage incident monitors for the arrival of the other party.
  4. A car thief with their eye on a particular vehicle could be notified once it is left unattended in the car park.

With Westfield standing up the service in the way they have, this becomes extremely easy. Furthermore, this is just one shopping centre out of dozens of Westfields across the country. If this practice continues, data mining the movements of individual vehicles across shopping centres will be a breeze for anyone with basic programming knowledge. And that’s really the crux of the problem: this isn’t one of those “Oh no, the big corporation is tracking me” situations, it’s that anyone can track me.

Whilst I’m by no means a strong privacy advocate (I have a fairly open life on display through numerous channels on the web), something about this just doesn’t sit quite right with me. Certainly those people who are strong privacy advocates would object to such a public disclosure of information.

What needs to be done

Putting my “software architect / security hat” back on for a moment, the problem is simply that Westfield is exposing data this application has no need for. The best way to keep a secret is to never have it and this is where they’ve gone wrong.

The parking feature of the app is designed for only one purpose: taking a number plate from the user and returning four possible positions with grainy images of the vehicle. On this basis, every piece of data in the “visit” node in the image earlier on is totally unnecessary, as is the ability to pull back more than four records at a time and as is the ability to do it over and over again as fast as possible. All that is required is the image so that someone can visually verify it’s their car (the number plate need not be clear), and of course information on the location within the centre.
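
As a rough sketch of what that trimming might look like server-side (the field names here are hypothetical, not Westfield’s actual schema), the service could strip each bay record down to the bare minimum before anything leaves the server:

  def trim_bay_record(bay):
      """Reduce a full bay record to only what the parking feature needs.

      The incoming structure is assumed to resemble the full record shown
      earlier; the plate text, entry time and the rest of the "visit" details
      are deliberately dropped so they never reach the client.
      """
      return {
          "image_url": bay["image_url"],      # grainy photo for visual confirmation
          "level": bay["position"]["level"],  # where in the centre the bay sits
          "zone": bay["position"]["zone"],
          "bay": bay["position"]["bay_number"],
      }

  # e.g. the endpoint would return [trim_bay_record(b) for b in matching_bays]

Enforcing the four-record limit and some basic rate limiting on the server side would then close off the bulk-harvesting angle as well.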

If they were to do this, the privacy risk is dramatically reduced, as all you’re left with as Joe Public is a small bunch of grainy images with indecipherable number plates. The positive feedback of the service explicitly returning the number plate (and its degree of confidence in the match) is gone. Sure, there’s still a privacy risk in that I can manually open up the app, search for someone’s car and then manually ID it, but the potential for automation is gone.

In fact most of the data returned by that service is totally unnecessary. Trimming it back would not only (largely) resolve the privacy problem, it would also reduce the size of the response, speeding it up for the end user and reducing the bandwidth burden on Westfield. Win-win-win.

Computer security firm Trend Micro says fake digital certificates from compromised Dutch certification authority DigiNotar were part of a broad-scale man-in-the-middle attack targeting Iranian Internet users—and may have left political dissidents, activists, and others trying to bypass Iran’s online censorship regime vulnerable to eavesdropping.

DigiNotar catapulted into the news late last month when it was discovered to have issued a rogue certificate for Google.com, making it possible for third parties to carry out man-in-the-middle attacks on Google services—like Gmail—as if they were trusted and verified systems controlled by Google. Online security professionals tried to react quickly, but Trend Micro noticed something very odd about requests for domain validation through diginotar.nl: it’s a small firm that mostly serves customers in the Netherlands, so one would expect most of its domain validation requests to come from the Netherlands. And that’s true. However, beginning August 28 a significant number of Internet users requesting domain validation through DigiNotar were from Iran. No other countries saw any significant uptick in domain verification requests through DigiNotar.

The unusual spike in requests started on August 28, dropped off substantially by August 30, and was all but gone on September 2.

“These aggregated statistics [..] clearly indicate that Iranian Internet users were exposed to a large scale man-in-the-middle attack, where SSL encrypted traffic can be decrypted by a third party,” Trend Micro senior threat researcher Feike Hacquebord wrote.

Trend Micro also notes that several Web proxy systems in the United States—which are widely used by individuals wishing to access sites anonymously and without revealing their IP address or other details—were also sending Web validation requests for DigiNotar. Trend Micro speculates that these proxy services were being used by Iranian citizens seeking to work around government censorship—but the fake trust certificates would have meant their encrypted communications could have been intercepted anyway.

Trend Micro’s analysis is based on the company’s Smart Protection Network, which collects and analyzes data from Trend Micro customers around the world, including what domain names are accessed by customers at particular times.