This morning, from about 8am-10am Eastern, Network Solutions services were unavailable again. As of this writing, they still haven't come fully back up. They attributed the interruption to a "global outage" at their colocation provider, but did not explain the nature of that outage. In theory, things should start working again over time. (Note: this is allegedly a different outage than yesterday's.)
Update: (12:05pm CDT) A Lesson in Business Continuity Planning
While I think the explanation of what happened at NetSol is somewhat lacking, one thing jumps out at me: why is the failure of a single vendor enough to bring all of NetSol crashing down? You could argue that you rely on your vendors to have their own redundancy, but sometimes the vendor itself is the single point of failure. In this case, it looks like the vendor's entire enterprise crumbled and took NetSol down with it. Even the most technologically robust firm can be brought to a halt by, say, a labor strike. The moral of the story: if the stakes are high enough, having redundant vendors can be a smart play.
Update (4:15pm CDT) Don't Believe Everything You Read on the Internet
Contrary to reports circulating on the Internet, this outage was not the result of a DoS attack. I have spoken via email with one of the NetSol engineers, and while I can't say what the cause was, I can say it wasn't an attack.
Apr 4th 2006