Passwords you have no option to see, and cannot get back

If you are dyslexic, passwords are a nightmare: typing in the correct sequence of characters without being able to see them is an irritating and time-consuming ordeal.
If you forget your password, can you get it back? No: you are given the chance to enter a new one. But this means you have to think of yet another password, which compounds the problem over time and makes you even less likely to remember it. And even if you do remember the old one when the password reset screen comes up, some systems will not let you use old passwords again.

I remember something like fifty passwords but still have problems with some, and thinking up new ones that are hardened enough not to be cracked, and then remembering them, is a nightmare. A nightmare, I tell you. Well, you probably already know.
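One way to ease that burden is a random passphrase: several words drawn from a large list are both hard to crack and far easier to remember than line noise. A minimal sketch in Python, with a tiny illustrative word list standing in for a real diceware list of thousands of words:

```python
import secrets

# Tiny illustrative word list; a real diceware list has ~7,776 words.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lemon",
         "granite", "velvet", "cobalt", "meadow", "harbor", "walnut"]

def passphrase(n_words: int = 4, sep: str = "-") -> str:
    """Pick n_words uniformly at random using a cryptographic RNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "cobalt-staple-orbit-lemon"
```

The strength comes from the size of the word list and the number of words, not from clever substitutions, so the result stays typeable even when you cannot see what you are typing.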


Out-of-touch CEOs

You have probably seen this one before, but just for fun I am going to let Steve Ballmer demonstrate it all by himself.

Lack of any connection with a product and its real usability; bugs, vulnerabilities, and problems are just swept under the carpet. The so-called "Metro" interface, which has since been renamed to goodness knows what, and the removal of the Start button are fine for the occasional user or on a touch-screen machine, but for most users' machines, and for Windows Server in particular, it is totally out of touch. It should be an option.

URL space pollution, or "shall we buy up these domain names to see if we can make a few thousand bucks"

Basically, domain name pollution, or domain squatting as it is also called, stops legitimate use of domain names for good purposes. It should be outlawed: holding on to a domain name for more than two years purely to make money from it, without using it, should be made illegal. It is just plain random capitalism and a misuse of the web. ICANN should be ashamed of themselves; this is probably not what Tim Berners-Lee intended to happen to his system.

Where is our IPv6?

Internet Protocol version 6 was promised to be rolled out in 2012. It would give us more IP addresses than we will ever need, and much better IP facilities such as Mobile IP.

IP addresses are now like gold dust; Microsoft has reportedly bought up all the remaining ones it can. Our current IPv4 addresses are 32-bit, so there are at most 2^32 of them, or 4,294,967,296. IPv6, on the other hand, would give us 2^128, or about 3.4 × 10^38, a very, very big number.

Every person and every device could have its own IP address. Part of the reasoning for not introducing IPv6 is the overhead; see my post on MTUs, or Maximum Transmission Unit sizes.
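The scale of that difference is easy to check for yourself. A quick sketch, assuming a world population of roughly 8 billion:

```python
# IPv4 addresses are 32-bit, IPv6 addresses are 128-bit.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"IPv4: {ipv4_total:,}")       # IPv4: 4,294,967,296
print(f"IPv6: {ipv6_total:.4e}")     # IPv6: 3.4028e+38

# Addresses available per person, assuming ~8 billion people:
per_person = ipv6_total // 8_000_000_000
print(f"{per_person:.1e} addresses each")
```

Even sharing the IPv6 space across every person on Earth leaves each of us more addresses than the entire IPv4 internet, many times over.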

HTML, CSS, the box model, and the lack of inheritance

If you have ever tried writing CSS for HTML with lots of boxes (<div>s), you will find that padding, borders, and margins are notoriously difficult to get right in a lot of circumstances. The model is basically what is called broken in programming parlance: it is not logical, and it is not normalized. It can waste hours when learning HTML layout, and hours more when modifying an existing pure HTML/CSS web site.
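To see why the default box model trips people up, here is a minimal Python sketch of the arithmetic. Under the CSS default (content-box), the width you declare is only the content width; padding and border are added on top of it:

```python
def rendered_width(css_width: int, padding: int, border: int,
                   box_sizing: str = "content-box") -> int:
    """On-screen width of a box (excluding margin) under the CSS box model."""
    if box_sizing == "content-box":    # the default: padding/border add on
        return css_width + 2 * padding + 2 * border
    if box_sizing == "border-box":     # padding/border count inside the width
        return css_width
    raise ValueError(f"unknown box-sizing: {box_sizing}")

# A div declared 300px wide (half of a 600px parent), 10px padding, 1px border:
print(rendered_width(300, 10, 1))                  # 322 -> two such divs overflow
print(rendered_width(300, 10, 1, "border-box"))    # 300 -> two of them fit
```

This is exactly why many style sheets now start with `* { box-sizing: border-box; }`: it makes the declared width mean what most people expect it to mean.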

The other problem with both HTML and CSS is the lack of inheritance, meaning you have to keep repeating yourself, and each time you make a modification you have to remember to make it in every place you have used the pattern, or worse, in cryptic CSS code left by a previous coder.

HTML and the verbosity of markup languages

The web is all text-based sequences of characters, not raw binary data, and it keeps repeating information that is actually redundant. Every bit of HTML is written in what is called markup, hence the name "Hyper Text Markup Language", and every bit of markup repeats its element name at the beginning and the end of its section, an overhead factor of more than two on every element name.

There are no macro facilities for generating common parameterized sequences of HTML, and no inheritance of things like page headers; this all has to be done either on the server and retransmitted each time, or in JavaScript, which still requires retransmission or caching in the client browser.

It would be relatively easy to provide a binary format that is totally transparent at both the browser and the server end; by this I mean the users and web authors would not be aware of the difference, and old browsers would still work.

This could be achieved using techniques similar to a technology pioneered years ago in a language called ASN.1. Basically, it allows the binary description of data streams and provides the encoding and decoding of those streams.
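As an illustration of the idea (not of real ASN.1, which is far richer), here is a sketch of a simple tag-length-value (TLV) binary encoding, the scheme ASN.1's BER rules are built on, compared against equivalent text markup. The numeric tags standing in for element names are made up for the example:

```python
import struct

# Hypothetical numeric tags standing in for the element names <p> and <b>.
TAG_P, TAG_B = 1, 2

def tlv(tag: int, payload: bytes) -> bytes:
    """Encode one tag-length-value record: 1-byte tag, 2-byte length, value."""
    return struct.pack("!BH", tag, len(payload)) + payload

text = b"<p>Hello <b>binary</b> world</p>"
binary = tlv(TAG_P, b"Hello " + tlv(TAG_B, b"binary") + b" world")

print(len(text), len(binary))  # 32 24
```

The binary form names each element once, as a fixed-size tag, instead of spelling its name out twice in full, and the explicit length field means the decoder never has to scan text looking for a closing tag.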

So, to put it in a nutshell, HTML is a very inefficient way of both transmitting web pages and rendering them, as they have to be parsed from text into binary before the computer can understand them.

XML, the cousin of HTML used for the storage and transmission of data, suffers from exactly the same problems.

Ethernet, MTUs, and the efficiency of the Internet

Ethernet, or IEEE 802.3, is the backbone of our whole worldwide internet system, used in offices and in the connections to the main backbone of the internet.

It has a variable called the MTU, or Maximum Transmission Unit, which is basically the physical packet size that data is transmitted around in under the ISO OSI 7-layer model, the set of standards that all our communications systems are historically based on.

Anyway, we live in a world of megabytes and gigabytes these days, but our basic communication infrastructure is still based on a maximum MTU of 1500 bytes. Now this is a little short-sighted, to say the least. It is written into every bit of hardware to do with the internet: all our routers, switches, and the very chips that send and receive Ethernet frames in our computers.

Also, given that each IP (Internet Protocol) packet has several levels of headers, there is an extra overhead of at least 20 bytes, and sometimes more, on every packet.

Then there are the TCP-based headers for protocols like HTTP, which is the thing you see at the beginning of every web address and probably ignore. This protocol header is text: a number of lines just as you would type them, followed by a blank line, followed by whatever HTML content there is. It also carries things like the dreaded cookies everyone is so worried about tracking their surfing.

TCP stands for Transmission Control Protocol, which harks back to the days of dumb terminals and workstations, the predecessors of our modern computers, only connected to mainframe computers rather than the internet.

All this adds up to a slow, antiquated system; the MTU should be able to be at least a megabyte for streaming, say, video on YouTube, or even for a modern web page.

You can calculate the average overheads, and they are significant, particularly with the environment in mind. This system is just not green; I will explain further in another post about HTML…
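A rough sketch of that calculation, assuming minimum-size IPv4 and TCP headers and ignoring options, ACKs, and retransmissions:

```python
MTU = 1500          # standard Ethernet payload size in bytes
ETH_OVERHEAD = 18   # Ethernet frame header + CRC trailer
IP_HEADER = 20      # minimum IPv4 header
TCP_HEADER = 20     # minimum TCP header

payload_per_packet = MTU - IP_HEADER - TCP_HEADER       # 1460 bytes of real data
efficiency = payload_per_packet / (MTU + ETH_OVERHEAD)

transfer_bytes = 100 * 1_000_000                        # a 100 MB download
packets = transfer_bytes // payload_per_packet
header_bytes = packets * (ETH_OVERHEAD + IP_HEADER + TCP_HEADER)

print(f"{efficiency:.1%} of each frame is payload")     # 96.2% of each frame is payload
print(f"{header_bytes:,} bytes of headers for the transfer")
```

A few percent sounds small, but multiplied across every packet on the internet it is a lot of repeated header bytes, and the per-packet processing cost in every router along the way scales with the packet count, not the byte count.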

ARPANET and the unintentional playground

ARPANET, or the Internet as we now know it, was released prematurely onto an unsuspecting world by the American Department of Defense (DoD), and it is not fit for purpose. It was designed to be robust but fails miserably under certain conditions when attacked. Basically, the routing protocols that make sure all your packets of data are directed to the right place, arrive there, and are reassembled by the IP (Internet Protocol) layer are open to fakery: flooding your local network with false packets can take over your communications using a number of techniques.

This attack can be carried out by a botnet, which is a number of computers, usually on the same subnet provided by your ISP (Internet Service Provider). Windows computers are renowned for being open to misuse for attacking other computers without their users' knowledge. Remember the Microsoft advert with a castle and guard dogs around a girl sitting with a laptop at her sitting room table? Well, it was all a PR job; if you watched any of the security news at the time you would have realized this was a marketing hoax.

The insecure HTTP (Hyper Text Transfer Protocol) used to deliver most HTML content is open to these sorts of attacks, such as teardrop attacks. Basically, the machines in the botnet produce crafted packets designed to get through and pretend to be valid packets. All they have to do is fit within what is called a window, an area in the sequence of packets within which a packet will be accepted.

It only takes one of these bogus crafted packets to get through to deliver what is called an exploit, which exploits a vulnerability, or multiple vulnerabilities, within the target computer's operating system to gain control.

The target computer may actually have a number of vulnerabilities at any one time, some known to Microsoft and some not; about 20 vulnerabilities a month were being published as patched over a period of several years. Anyway, the set of vulnerabilities within a computer is known as an attack surface. One such vulnerability required only a single-pixel graphic, a GIF, to let an attacker into a Windows machine's GDI.

The whole thing resulted in an open bonanza for a lot of people termed script kiddies, although they are of varying ages, trying to hack and take control of systems. They use tools and code available on the internet, not necessarily with full understanding of either, to play at attacking random and selected users' machines and whole networks.

Now, exploits are very hard things to craft properly, and a lot of badly crafted ones resulted in a lot of machines crashing apparently at random. Remember the period of YouTube crashes? Well, that was a teardrop attack. And the earlier Blaster and Sasser attacks resulted in thousands of machines crashing.

I believe there are still vulnerabilities from this era of Microsoft code in Windows XP, Vista, and Windows 7, and there may still be some in Windows 8.

Well-crafted and well-executed attacks, on the other hand, can go totally unnoticed; the only thing you may notice is a slight slowdown in your machine's response, or excessive CPU usage.

Even your so-called virus protection software is easy to circumvent for an up-to-date, what is termed elite ("31337" in hacker speak), hacker. Keeping to and working at this cutting edge is a technical art form in itself. And there are competitions like DEF CON, where the NSA are no longer welcome since the Edward Snowden witch hunt.

Okay, so what do you do about all this? The answer is simple for a start, though no guarantee: use the HTTPS protocol for every website that supports it. Look at the web address, or URL, in the address bar and check that it begins with "https://"; if it does not, try inserting an 's' there. If the page fails to load, or you get what is termed a 404, another way of saying page not found, then that site does not support HTTPS. But this is no guarantee: GCHQ, the NSA, and any elite hacker can still get into your computer in various ways.
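That manual edit can be sketched in a few lines of Python using only the standard library (the example URL is just a placeholder); whether the HTTPS version actually loads still has to be tested in the browser or with something like urllib.request:

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_to_https(url: str) -> str:
    """Rewrite an http:// URL to https://; leave anything else untouched."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        return urlunsplit(("https",) + tuple(parts)[1:])
    return url

print(upgrade_to_https("http://example.com/page"))  # https://example.com/page
```

Browser extensions that became popular around this time, such as HTTPS Everywhere, automated exactly this rewrite.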

Basically, the internet is a playground, intentionally or unintentionally released onto the public and the world. The system could have been made secure using the correct techniques, but premature release, capitalism, and the rush to get things to market precluded this.

So are the hackers, or to term it correctly the crackers, guilty? Well, some are and some are not, but who is really guilty? The American DoD, and the pressures on the IETF after the handover of ARPANET. The internet's documentation is still based on what are called "Requests for Comments", or RFCs, which says it all really. I read these in the 1990s and found two of the main errors which are still there today in ARPANET, written into every internet device, every router, and the internet's backbone.

As a result we have a system that is not really fit for purpose, although it works, most of the time. But a lot of hackers who have tried to bring things to the attention of the authorities and Microsoft have been ignored, silenced, and even vilified.

Microsoft refused to take part in the Full Disclosure mailing list, which acted as a clearing house for vulnerabilities and exploits in open source products, and as a result there were people selling Microsoft exploits for extortionate amounts of money. Meanwhile, Linux and other open source products benefited from the many-eyeballs approach to bug and exploit discovery and fixing. Most of the time the open source vendors would publish their addresses and receive the exploits from the discoverers, who may or may not have been paid for their work, and the code fixes would then be published on the mailing list.

Microsoft wanted no part of this, and as a result they still do not have a single secure operating system, other than a research project called Singularity, which was written from the ground up and uses a very clever and complex model based on .NET to provide a secure system.

Well, that's all folks for now… Ethernet, MTUs, and the efficiency of the Internet will come next.