Sunday, 2 August 2015

Fixing Outlook when it would not start

Trying to delete a large (20MB) file in Outlook led to the window greying out, and the only way to recover was to restart Outlook.

After a few restarts a message came up saying Outlook could not start, and it pinpointed a file with a .ost suffix as the problem.

Outlook uses this file to recreate your mailbox. It could not be deleted as Outlook was using it, even after trying to stop the program.

Since it would have taken longer to work out how to close the service than to reboot (you can tell I am a UNIX guy), the solution went as follows:

1. Reboot the machine to release the lock on the .ost file
2. Rename the .ost file: I gave it a second .old suffix (did I say I was a UNIX guy? :) )
3. Restart Outlook. A message flashed up saying that Outlook was being prepared for first use
4. Outlook started with an empty inbox. PANIC
5. Remove the new .ost file and remove the .old suffix from the old .ost file
6. Restart Outlook

SUCCESS: all old mail appeared. Back in business

Your mileage may vary. This worked for me

Sunday, 26 April 2015

The problems involved in securing a file

Security: a human as well as a technical problem
This note is a rumination on security that shows that absolute security is unattainable. Some of the complexities of achieving a high level of security are presented. This level of security is only needed for highly sensitive documents, and part of the task of a secure systems architect must be to establish the level of security needed and determine whether the necessary hindrance of work is justified.
It is not suggested that this is a definitive list of the issues involved in securing a single file, let alone a system; it is more a consciousness raising effort.


A file can be considered secure if
  1. CRUD (Create, Read, Update, Delete) access is available only to authorised users (humans or processes)
  2. Only authorised users may modify a user's CRUD access
    1. As a corollary, attackers must not be able to deny authorised users access
  3. Authorised users cannot compromise file security
The last condition is probably impossible to fulfil: anyone with a key to a safe can get at the contents of the safe.

I will have two scenarios in mind.
  1. The file is on a device (laptop, phone etc) to which a user has direct access.
  2. The file is on a server and accessible via a web service or web application.

Perimeter Security

Here the file is open to anyone who can access the device or the web application. Generally this means they need a username and password. Access can be made more difficult by requiring a token or using biometrics (though the pain of trying to register, say, fingerprints makes this option unattractive).
But if an attacker has physical access to the device all bets are off.
Password cracking is hard, especially if there is a lockout after too many attempts. Shoulder surfing a legitimate user or using malware to install a key logger is possible, but as much a matter of luck as skill. Of course the defenders may have installed a key logger to monitor access, and malware may be able to get hold of those records.
But removing the hard disc, making a bit level copy and scraping all files off the copy is much easier. It also has the added bonus of getting all the files off the disc, with any valuable information they might contain.
Alternatively an attacker, having identified the location of the file, could change the content on the original disc or make the file unusable.
So why not hide the file itself on a secure server and only allow remote access? As long as no local copies can be made (for example in a browser cache) the content is secure, right? The user still needs an ID and password but extra security like a VPN tunnel and/or using the latest secure transport protocol plus two factor authentication should be enough?
The problem here is that you have to be sure the application is secure. Web applications are not easy to secure properly, and just one flaw in the security protocol could let an attacker get hold of the data, for example via a directory traversal attack. At worst, text could be scraped from the browser screen.
To prevent man in the middle attacks you also need to ensure the file is encrypted in transit, using TLS (Transport Layer Security) or SSH for internal access, and ensure the user needs to submit a password before getting access. And make sure no plain text copies are left lying around.
The bottom line is that perimeter security, like a cylinder lock, deters only honest people. The measures here make it harder to breach security but the more valuable the data the harder attackers will try to get to it. The goal of security is to make the cost of getting the data more than the value of the data to the attacker.

Encrypting the file is an obvious next step. Legitimate users will still be able to view the contents, and the device can be shared with others not authorised to see them. This is like having a locked cabinet marked “Top Secret” in the middle of an office: it tells attackers where to look for the gold.
Problems with encryption include back doors and insecure implementations, not to mention ensuring the password is not inadvertently compromised (the more secure the password, the more likely it is to be written down: few people can remember passwords like sASks1029”))”!, and the problem gets worse the more such passwords people need to remember). Secure password wallets simply concentrate all passwords in a single weak spot.
For really sensitive material two users could be needed to unlock the file, each having half the password, as sketched below. But then backup users need to be available in case one person is ill or on vacation. And the more people who know the parts of the password, the more likely a breach will occur.
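A literal half of a password can be brute forced, so a better version of the idea is to give each user a random share whose XOR is the real key: neither share alone reveals anything. A minimal sketch (class and method names are illustrative):

import java.security.SecureRandom;

public class KeySplit {
    // Split a symmetric key into two shares; neither share alone reveals the key
    public static byte[][] split(byte[] key) {
        byte[] share1 = new byte[key.length];
        new SecureRandom().nextBytes(share1);            // random pad held by user 1
        byte[] share2 = new byte[key.length];
        for (int i = 0; i < key.length; i++) {
            share2[i] = (byte) (key[i] ^ share1[i]);     // key XOR pad held by user 2
        }
        return new byte[][] { share1, share2 };
    }

    // Both users must supply their shares to recover the key
    public static byte[] combine(byte[] share1, byte[] share2) {
        byte[] key = new byte[share1.length];
        for (int i = 0; i < key.length; i++) {
            key[i] = (byte) (share1[i] ^ share2[i]);
        }
        return key;
    }
}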
Encryption is a useful aid to security then, but not a silver bullet. You need to be sure the algorithm is correct and securely implemented (no local copies in plain text in obscure directories) and that the users can remember the password.
Access Control
Granting or denying users access is a different kettle of fish. In a role based access system this can be delegated to users in a special administrator role: both access and the right to control access can be granted or removed by an administrator. As before there should be at least two administrators, and ideally changes in access control would need a second administrator's approval. The question of who can create the special administrators raises the spectre of an infinite regress. This is a problem that has to be solved organisationally, though technology may help.

The Wrap
This has just scratched the surface of the problems involved in the apparently simple task of securing a file. It turns out that perimeter security deters only the honest, physical access lets attackers do whatever they want, and encryption has its own minefield of problems. Finally, the problems of granting and denying access need to be solved organisationally rather than technically. For really sensitive content a “four eyes” principle, whereby two users must collaborate to access the content, will minimise the risk of rogue users giving the content to unauthorised recipients.
The bottom line here is that total security is impossible, and the amount of effort devoted to securing a document should be proportional to the value of the content and the cost of losing it or having it leaked to the wrong people. One should always have the hierarchy of secrecy in mind: Restricted, Confidential, Secret, Top Secret, List Access Only and Embarrassing, and assign content to one of these levels before deciding the effort needed to keep it secret.

Sunday, 21 December 2014

Basic Cyber defences: Secure cookies, Http headers and Content Security Policy

There is no such thing as total security, either in real life or online. All security can do is make the cost of defeating it greater than the reward. Security measures, while not worthless, can only reduce the risk.

For cyber security some HTTP headers can be used to reduce the risk, and these are supported by all major modern browsers. A list of useful headers is given in [2].

Three defences will reduce risk considerably

a. Setting the HttpOnly cookie flag
b. Setting X-XSS-Protection: 1
c. Using Content-Security-Policy

None of these are magic bullets but they are valuable parts of a total security package.

The HttpOnly cookie flag

The HttpOnly flag is like a Yale lock. It keeps out the amateurs and the lazy bad guys, but not the determined attacker. One common cross site scripting attack is cookie theft, especially of a session ID, and one way to reduce this risk is to use the HttpOnly flag, which is supported by all modern browsers.

This ensures cookie values cannot be accessed by client side scripts, e.g. JavaScript.

The simple way to do this is to append “; HttpOnly” to the cookie value.

It is possible to get past this flag by using a combination of cross site scripting and cross site request forgery to force users to generate requests, which means attackers do not need to access the cookie. Other techniques to get round HttpOnly have been considered, such as using the HTTP TRACE verb. But this flag is cheap to use and stops simple attacks.

In Servlet 3.0 you can configure this in web.xml as follows:
<session-config>
    <cookie-config>
        <http-only>true</http-only>
    </cookie-config>
</session-config>

Older versions of Tomcat (before version 7) only allow this to be set in the server.xml file.
In Tomcat 7.0 the attribute is enabled by default, which means your JSESSIONID will be HttpOnly unless you change the default behaviour in server.xml as well.

You can also programmatically add an HttpOnly cookie directly to the response as follows:

String cookie = "mycookie=test; Secure; HttpOnly";
response.addHeader("Set-Cookie", cookie);

The Servlet 3.0 API also adds the convenience method setHttpOnly for setting this flag.
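For example, assuming response is the current HttpServletResponse (the cookie name and value are illustrative):

// javax.servlet.http.Cookie, Servlet 3.0 or later
Cookie cookie = new Cookie("mycookie", "test");
cookie.setSecure(true);    // only send the cookie over HTTPS
cookie.setHttpOnly(true);  // hide the cookie from client side scripts
response.addCookie(cookie);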

The Cross Site Scripting header

X-XSS-Protection is a header first created by Microsoft to block common reflected cross site scripting attacks. It is enabled by default in Internet Explorer, Safari and Chrome, but is not supported by Firefox.

There are three modes:

1) The value 1 is the default behaviour and tells the browser to modify the response to block detected cross site scripting attacks.

2) The value 0 disables cross site scripting protection completely.

3) The value 1 with mode=block tells the browser to block the attack and also prevent the page rendering entirely. This means users will see only an empty browser page, so this mode should only be used after usability testing.

You can also set this header in Java. For example:

X-XSS-Protection: 1
response.addHeader("X-XSS-Protection", "1");

X-XSS-Protection: 0
response.addHeader("X-XSS-Protection", "0");

X-XSS-Protection: 1; mode=block
response.addHeader("X-XSS-Protection", "1; mode=block");

It may be better to set this from the web server configuration, which will vary with the server [1] 

Content Security Policy

A Content Security Policy is like a higher level whitelist. The Content Security Policy (CSP) mechanism lets a site define trusted sources of content and reject content from other sources. Unfortunately this involves various restrictions: for example, JavaScript cannot be run inline or even appear in the page, so the style in which JavaScript is written has to change, mainly by attaching event handlers, in separate files, to page elements [3]. This means CSP can be expensive to integrate, and a site that uses it must ensure its developers know enough about CSP and the JavaScript changes needed that they will not be confused by, for example, <button>a button</button> with no apparent handling code.

CSP is not meant to be a frontline defence against attacks, but a defence in depth to minimise the harm caused by content injection attacks [4].

CSP mitigates man in the middle content injection, which is undetectable over plain HTTP, as well as most cross site scripting attacks other than those arriving through user input, which can be escaped before use; with DOM injection, attacks may still be possible [3].

How To set up CSP

Include a CSP header in your response that looks like one of the following (note the colon):

    Content-Security-Policy: policy
    Content-Security-Policy-Report-Only: policy

where policy is a string of policy directives separated by semicolons.

A policy directive is a directive name followed by a list of URLs separated by spaces:
Content-Security-Policy: directive-1 url1 url2; directive-2 url3 url4; ...

A full list of policy directives is given in [4]. The most important is default-src, which specifies default sources for all types of content. It is not obligatory but is a good idea. It is also a good idea to include the script-src directive, which specifies the sites from which to accept scripts. The 'unsafe-inline' source keyword should not be used without a very good reason.

The default-src directive acts as a fallback for the following directives, so if a directive is not specified explicitly in the policy it will use the default sources:
  • script-src
  • object-src
  • style-src
  • img-src
  • media-src
  • frame-src
  • font-src
  • connect-src
Without a default-src, unspecified directives may default to 'none' or 'all' depending on the browser.


1. Content-Security-Policy: default-src https://example.com
Documents can only be loaded over https from the stated URL (example.com stands in for your trusted host).

2. Content-Security-Policy: default-src 'self'; img-src *; media-src media.example.com; script-src scripts.example.com
Content is only allowed from the document's original host, with these exceptions (the example.com hosts are placeholders):
  1. images can come from anywhere
  2. media can only come from media.example.com
  3. scripts can only come from scripts.example.com

3. Content-Security-Policy-Report-Only: default-src 'self'; report-uri http://localhost/policyviolations.html
Action will not be taken but violations will be reported to the specified URI. A violation report will show:
  a) document-uri: the document where the attack took place
  b) violated-directive: which directive was violated
  c) script-sample: a portion of the XSS attack
  d) line-number: the line number, for debugging and research

4. The report-only header can also be set in Java:
response.setHeader("Content-Security-Policy-Report-Only", "default-src 'self'; report-uri http://someaccessibleuri");
Note that you can have both the CSP header and the CSP-Report-Only header active at the same time, so you can enforce one policy while testing another.
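One convenient way to set these headers across a whole application is a servlet filter. A minimal sketch, with an example policy you would tailor to your own trusted sources (the filter name is illustrative, and it still has to be mapped in web.xml or with @WebFilter):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class SecurityHeaderFilter implements Filter {
    public void init(FilterConfig config) { }
    public void destroy() { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // Block detected reflected XSS rather than sanitising the page
        response.setHeader("X-XSS-Protection", "1; mode=block");
        // Example policy only: same-origin content, images from anywhere
        response.setHeader("Content-Security-Policy", "default-src 'self'; img-src *");
        chain.doFilter(req, res);
    }
}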

The Wrap
To reduce XSS risk:
Set the HttpOnly flag on the session cookie to prevent session hijacking
Set X-XSS-Protection: 1
Plan to use Content-Security-Policy

The HttpOnly flag is not a header in its own right, but setting it will prevent a lot of common XSS attacks.

The X-XSS-Protection header is not supported by all browsers, but is enabled by default in IE, Chrome and Safari, and prevents reflected XSS attacks.

CSP is a full strength protective approach that allows definition of trusted sources for various types of content, but since it forbids inline JavaScript in a page it may break existing functionality unless extensively tested. Introduction of CSP should be regarded as a major change and planned thoroughly.


  2. Style changes needed with Content Security Policy

Thursday, 13 November 2014

Cyber Attacks for Beginners: Miscellaneous Attacks

Nothing to do with security, just a relaxing picture of Basel
Cross Site Scripting, Cross Site Request Forgery, SQL Injection and HTTP Response Splitting are all examples of injection attacks. However almost anything can be, and has been, used to attack sites one way or another. A number of attacks are outlined here to highlight this fact.

Null Character Injection

In Java the null character (0x00 in hexadecimal) is a valid character, but in C/C++ it is used to terminate a string. Since C and C++ are used to write operating system interactions, this mismatch can lead to vulnerabilities via OS injection. A simple guard is sketched below.
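For example, a Java suffix check could be fooled by "report.txt\0.jpg": Java sees a name ending in .jpg, while a C library stops at the null byte and opens report.txt, on platforms where the null byte reaches native code. A minimal guard (class and method names are invented for illustration):

public final class NullByteGuard {
    // Reject input containing an embedded 0x00 before it can reach native code
    public static String require(String input) {
        if (input.indexOf('\u0000') >= 0) {
            throw new IllegalArgumentException("null byte in input");
        }
        return input;
    }
}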

LDAP Injection

The Lightweight Directory Access Protocol can be attacked in a manner similar to SQL Injection. While LDAP repositories may not reap a financial reward, they may hold sensitive information, which could, for example, be replaced with JavaScript snippets.

OS Injection
This happens when a user supplies malicious code that interacts with the operating system. If they can get root privileges they can wipe the machine clean with a command like cd /; rm -r *, and it is not easy to restore or reinstall a system like this since the boot loader is not removed. An inexperienced systems administrator, or one pressured to restore service instantly, might simply bulk erase the disk, thus removing all possibility of using forensic tools to find the origin of the attack or, if the system has not been backed up recently, recovering information. More subtly, files could be emptied or altered in the hope the attack would not be discovered till they had been backed up a few times. The defence sketched below avoids handing user input to a shell at all.
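A hedged sketch of that defence: whitelist the permitted values and pass arguments as an array rather than through a shell (the script path and report names are invented for illustration):

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SafeCommand {
    // Whitelist of permitted report names; anything else is rejected
    private static final Set<String> ALLOWED =
            new HashSet<>(Arrays.asList("daily", "weekly"));

    public static Process runReport(String reportName) throws IOException {
        if (!ALLOWED.contains(reportName)) {
            throw new IllegalArgumentException("unknown report: " + reportName);
        }
        // Argument array form: the value is never interpreted by a shell,
        // so metacharacters like ; and & have no effect
        return new ProcessBuilder("/usr/local/bin/report.sh", reportName).start();
    }
}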

Log Injection

An attacker might enter \r\n into their input and, if this is logged verbatim, they will forge a log entry. This could be used to damage a company's case in court or to damage its reputation. Such an attack would class as an Advanced Persistent Threat, since it would only target one company at a time and would require reconnaissance. A minimal defence is sketched below.
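The sketch neutralises CR and LF in anything user supplied before it reaches the log (the helper name is invented):

public final class LogSafety {
    // Replace CR and LF with visible escapes so a forged line cannot start
    public static String forLog(String userInput) {
        return userInput.replace("\r", "\\r").replace("\n", "\\n");
    }
}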

Directory Traversal

Here the site allows users to retrieve files and an attacker uses this to get arbitrary files, for example the list of user names and passwords. Even though the passwords are stored encrypted or hashed, once downloaded the attacker can attack the file at leisure. A canonical path check, as sketched below, blocks the attack.
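A minimal sketch: resolve the requested name against a fixed base directory and confirm the canonical path is still inside it, which defeats "../" sequences (the base directory and class name are illustrative):

import java.io.File;
import java.io.IOException;

public final class DownloadResolver {
    private static final File BASE = new File("/var/www/downloads"); // example base

    public static File resolve(String requestedName) throws IOException {
        File base = BASE.getCanonicalFile();
        File target = new File(base, requestedName).getCanonicalFile();
        // getCanonicalFile resolves ".." and symlinks, so an escape shows up here
        if (!target.getPath().startsWith(base.getPath() + File.separator)) {
            throw new SecurityException("request escapes the download directory");
        }
        return target;
    }
}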

XML Injection
Here XML elements contain malicious code, much as in LDAP and SQL injection. Again this is a specialised attack.

Buffer Overflow

If user input is not handled safely an attacker can input a string that exceeds the capacity of the buffer designed to hold it, thus overwriting other parts of memory. With luck, skill and reconnaissance the attacker can then inject their own code into the system, or simply crash the application. This attack is rare in Java and other managed languages, but occurs in web applications written in languages that do not handle buffers safely.

Random Input Attack

At one time smart cards could be attacked by stressing them and feeding them random data till an input caused them to output all the data on the card. This attack could also be combined with a buffer overflow attack, but it seems to have fallen out of fashion, probably because card makers and web application designers have learned how to defend against this sort of attack. It is probably due for revival.

Insecure Direct Object Reference

Here a direct object reference is used insecurely: for example an account number or price is exposed to the client, and the attacker can manipulate it to their benefit. The risk can be mitigated by storing the data in the browser session and using indirect references that map to the actual values. Another, though more complicated, approach is to keep the actual values on the server and map the indirect references sent by the browser to the values held on the server, as sketched below.
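A minimal sketch of an indirect reference map, assuming one map per user session so tokens cannot be replayed across users (class and method names are illustrative):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class AccountReferenceMap {
    // Token to real account number; the browser only ever sees the tokens
    private final Map<String, String> tokenToAccount = new ConcurrentHashMap<>();

    // Called when rendering a page: hand the browser a token, not the account
    public String tokenFor(String accountNumber) {
        String token = UUID.randomUUID().toString();
        tokenToAccount.put(token, accountNumber);
        return token;
    }

    // Called when a request comes back: map the token to the real account
    public String accountFor(String token) {
        String account = tokenToAccount.get(token);
        if (account == null) {
            throw new SecurityException("unknown reference");
        }
        return account;
    }
}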

The Wrap

Any part of a system or web application can serve as an attack point. A general principle of defence is a zero trust policy. Since this has performance implications, however, a balance needs to be struck: some code may be relatively lightly protected, while other code, for example safety critical code, is heavily protected, perhaps even triplicated, with the accepted output being a majority vote of all copies, so that a compromise of one copy becomes obvious.


The references here only describe some attacks. Googling on the attack names I give will reveal a ton of links. The OWASP links are good for someone with a moderate technical knowledge of security.

  1. This also gives an example of a directory traversal attack used to retrieve the password file. The buffer overflow section involves C, since languages like Java make this attack hard.
The following links point to earlier articles in this series.

Friday, 31 October 2014

Cyber Attacks for Beginners: Http Response Splitting

What is Response splitting?

Response Splitting is quite a bit harder to understand than Cross Site Scripting, Cross Site Request Forgery or SQL Injection. It relies on the facts that:

  1. The HTTP protocol on which the Web is based is a request response protocol, that is, every request must have a matching response.
  2. The elements of a response are separated by CR-LF characters.

In what follows I use CRLF to denote this character pair, but in reality the characters are sent as the URL encoded values %0d%0a.

The twist to this is that a response can come before the matching request. This sounds insane and hopefully will, if possible, be rectified in a future version of the protocol. I do not know enough to say why the protocol does not simply require dropping a response with no prior request.

Wikipedia puts this more formally:

The attack consists of making the server print a carriage return (CR,ASCII 0x0D) line feed (LF, ASCII 0x0A) sequence followed by content supplied by the attacker in the header section of its response, typically by including them in input fields sent to the application. Per the HTTP standard (RFC 2616), headers are separated by one CRLF and the response's headers are separated from its body by two. Therefore, the failure to remove CRs and LFs allows the attacker to set arbitrary headers, take control of the body, or break the response into two or more separate responses—hence the name.

Outline of an attack
The attacker sends the following

  1. A valid request
  2. A valid but empty response
  3. A second valid response that may (will) contain malicious code
  4. A second valid request, shortly after the first

1 and 2 pair up as the protocol demands

3 is left dangling till the second request (4)

After the second request (4) is sent, the dangling malicious response (3) is delivered in reply.

If the computer were human it would be thinking:

Ah, a request. And a response. Good.
A second response but no request; hang on to it.
Ah, a second request: send the second response, and cache it for all repetitions of this request.
Job done.

Why is it Dangerous

At first sight this looks insane: the attacker is sending malicious code to themselves. The attack gets really dangerous if the requests and responses are sent to a (proxy) server that caches responses. If the second request (4) is a common one, everyone who sends this request is served the poisonous response. Reference (1) gives a detailed walk through of an attack and shows how it can be used for Cross Site Scripting and Cross Site Request Forgery.

The following sequence is adapted from (1):

1. A valid request carrying the injected payload

2. Content-Length: 0 CRLF

3. Content-Type: text/html CRLF
Content-Length: 35 CRLF
<script>alert('Running JS on your machine')</script>

4. Any valid request, e.g.
    GET /branches.html HTTP/1.1

The defence is simple

Use server side validation and disallow CRLF characters in all requests where user input is reflected in the response header.

The attacker may try to evade this with a double encoding attack that disguises the CRLF characters. If the defender scans for encoded CRLF characters before fully decoding the input, doubly encoded sequences will be missed, so validate after decoding, as in the sketch below.
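A minimal sketch, assuming the input arrives URL encoded (the class and method names are invented; note that URLDecoder will also reject malformed % sequences):

import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public final class HeaderSafety {
    public static String decodeAndCheck(String value) throws UnsupportedEncodingException {
        String decoded = value;
        String previous;
        do {
            previous = decoded;
            decoded = URLDecoder.decode(previous, "UTF-8"); // undo one layer of encoding
        } while (!decoded.equals(previous));                // repeat until stable
        if (decoded.indexOf('\r') >= 0 || decoded.indexOf('\n') >= 0) {
            throw new IllegalArgumentException("CRLF sequence in user input");
        }
        return decoded;
    }
}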

The Wrap

This attack exploits the properties of the HTTP protocol. To do real damage it needs the requests and responses to pass through a server that caches responses.


Tuesday, 14 October 2014

Cyber Attacks for Beginners: Cross Site Request Forgery (CSRF)

What is it
Cross site request forgery is a special case of session hijacking. When you log on to a site it starts a session for you. If you navigate away without signing off you may still have a session on that site. If you then visit a malicious site, that site can send you code that impersonates you on the first site. For example, if you visit a bank, close the page without signing off and then visit a malicious site, the malicious site could send the bank a POST request to transfer money from your account to theirs without your knowledge. As I said in a previous post, they could also make it look like you are committing some sort of crime and then report it to the police.

Such an attack is not easy. It requires you to navigate away from a site without signing off and visit a malicious site before your session on the first site expires. It also requires getting past any confirmation pages the bank or other site puts up. This makes it a numbers game for the attacker: all they need is a couple of successful visits a day and they are in business, at no extra cost.

Why is it Dangerous
This attack becomes very dangerous when used with JavaScript and AJAX, which let attackers send asynchronous POST requests without your knowledge. When combined with Cross Site Scripting it risks your machine being turned into a zombie controlled by the attacker.

Signs of vulnerability to this include accepting HTTP requests from an authenticated user without some control to verify that the request is unique to the user's session, and very long session timeouts, which increase the chance an attack is made while the session is valid.
This section outlines some basic defences against CSRF. A full set of defences is given in (1) below.

One powerful defence is to send, on each request, a random secret shared with the server: something an attacker cannot access and cannot guess.

More precisely

A request from a Web application should include a hidden input parameter (token), with a common name such as "CSRFToken", that has a random value (which should be long), generated by a cryptographically strong random number generator whenever a new session starts. An alternative might be to use a secure hash, Base64 encoded for transmission; but as randomness and uniqueness must go into the data that is hashed, this has little advantage over a secure random number generator, though local considerations might make it attractive.
The token should only be sent via POST requests, and server side actions that change state should respond only to POST requests (this is referred to as HTTP Method Scoping). More details (lots) in (1). A minimal sketch follows.
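A sketch of issuing and checking such a token, assuming the Servlet session API and Java 8 (the class, method and attribute names are illustrative):

import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpSession;

public class CsrfToken {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Call once when the session starts; embed the token as a hidden form input
    public static String issue(HttpSession session) {
        byte[] bytes = new byte[32];                     // 256 bits of randomness
        RANDOM.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        session.setAttribute("CSRFToken", token);
        return token;
    }

    // Call on every state-changing POST before doing any work
    public static boolean isValid(HttpSession session, String submitted) {
        String expected = (String) session.getAttribute("CSRFToken");
        if (expected == null || submitted == null) {
            return false;
        }
        // Constant-time comparison avoids leaking the token via timing
        return MessageDigest.isEqual(expected.getBytes(), submitted.getBytes());
    }
}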

Another method is Double Cookie Submission, where a random token is sent both as a request parameter and as a cookie, and the server compares the two to check they are equal. By default the attacker cannot read any data sent by the server or modify cookie values: this is the same origin policy, and it requires some skill and effort to disable in the browser (just don't do it, right?).

Note that any cross-site scripting (XSS) vulnerability (2) can be used to defeat the defences above, but there are defences XSS cannot evade, such as Captcha, reauthentication and one-time passwords such as those generated by RSA tokens.

Do it Yourself protection
  1. Log off immediately after using a web application (especially online banking)
  2. Do not let your browser store user names and passwords (though the risk is less if the password is stored encrypted)
  3. Using the same browser for sensitive applications and general surfing is a bad idea and leaves you dependent on the security of the sites you visit. This may be fairly safe for online banking, but not for watching porn.
  4. Use a plugin that disables JavaScript wherever possible, so an attacker cannot submit an attack unless they persuade you to submit a form manually.
  5. The above recommendations come from (1). In addition I suggest using an Incognito window, if your browser allows it, so passwords etc. do not hang around in your cache.

The Wrap
There is no sure defence against any attack, since attackers and defenders both evolve their techniques and every new advance in technology brings new weaknesses and strengths. The defences outlined here may, however, change the economics of an attack.


  1. Attacks, Defences and how to review code with CSRF in mind

Thursday, 9 October 2014


Cyber Attacks for Beginners: Cross Site Scripting
Keep your eyes open when developing applications

What is it
Cross Site Scripting (XSS) occurs when a website sends untrusted malicious data, for example HTML or JavaScript, to a browser that then runs the code. A typical point of attack is where user input is reflected back to the user, for example when you input your name and the next page says “Welcome” followed by your name. (Examples are given in the links at the bottom of this article.)

There are three types of cross site scripting

  1. Reflected XSS, where malicious data is embedded in the page that is returned to the browser immediately following the request. One example is where an attacker tricks a victim into loading a URL containing malicious code into their browser. The URL is sent to a legitimate server and a response containing the malicious code is returned to the victim's browser, where it is executed, perhaps sending the victim to the attacker's site where an effort may be made to rob them.
  2. Stored XSS, where a malicious script an attacker previously managed to get stored on a server is sent to all users at some later time.
  3. DOM based XSS, where malicious code is injected into the page's DOM.

Reflected and persistent XSS assume that the payload moves from the browser to the server and back: if it goes back to the same browser it is reflected, if it goes to different browsers it is stored. DOM based XSS does not have this limitation.

Why is it Dangerous
XSS is dangerous because the code injected into the browser could do almost anything. The response could contain malicious code invisible to the user (DOM based Cross Site Scripting) that is then executed. The possibilities are endless. The page could display an image containing some form of malware that is triggered from the page. The code could redirect the user to a site that downloads malware, or it could send details of what the user does to an attacker, whether in government or private crime. Or it could frame the user, making it look like they were committing a crime or trying to cheat a gangster. Fortunately almost all attackers are only in it for the money, which simplifies the defender's job immensely.

DOM Based Cross site Scripting
When a browser executes JavaScript it makes a number of JavaScript objects available to the code. These represent the Document Object Model (DOM), which is the page as the browser experiences it. The DOM is populated according to the browser's understanding of the page: for example document.URL and document.location are populated with the URL of the page as the browser understands it, and are invisible to the user. Sometimes the page will hold JavaScript that parses document.URL and decides on an action, for example writing the value of a parameter to the page.
The danger arises when the original request sends a parameter value that contains malicious code. If the malicious code is hidden behind a # (known technically as a fragment identifier), the code after the # may not even be sent to the server, so server side defences never see it.

This section outlines some basic defences against XSS. A full set of defences is given in the cheat sheets below. Mostly they rely on escaping and encoding, which is best handled by a trusted third party library.

Some browsers provide some protection against cross site scripting, for example by encoding special characters that JavaScript uses, such as “<” and “>”, into safe forms such as “%3C” and “%3E”, but these can be evaded by an attack that does not need the raw forms of these characters. Encoding provides a useful layer of defence, and there are third party libraries that provide this function, but it is not a silver bullet.
Apart from encoding all input and output (and the encoding needed differs according to the context), two techniques are useful: blacklisting, where the request is rejected if it contains dangerous characters, and whitelisting, where the request is rejected unless it contains only safe characters.

Generally speaking both blacklisting and whitelisting should be used. For example a request containing a name can be rejected if it contains “<” and accepted if it contains only alphanumeric characters. Of course whitelisting gets more complicated for languages like Chinese, Hebrew or Arabic, and can be vetoed by budget conscious managers, but the principles remain valid.

In practice whitelisting and blacklisting tend to rely on regular expressions, and these need to be tested thoroughly before a product is released. A sketch of the idea follows.
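A minimal sketch for a simple ASCII name field; the patterns and names are examples only, not production rules:

import java.util.regex.Pattern;

public class NameValidator {
    // Whitelist: letters, spaces, apostrophes and hyphens only (ASCII names)
    private static final Pattern WHITELIST = Pattern.compile("[A-Za-z][A-Za-z '\\-]{0,49}");
    // Blacklist as a backstop: characters with special meaning in HTML
    private static final Pattern BLACKLIST = Pattern.compile("[<>\"&]");

    public static boolean isSafeName(String input) {
        return input != null
                && WHITELIST.matcher(input).matches()   // accept only safe characters
                && !BLACKLIST.matcher(input).find();    // belt and braces
    }
}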

In brief, an effective defence against Stored and Reflected Cross Site Scripting is server side data validation. Both types can be detected by manual fault injection, e.g. typing an alert scriptlet into a field. An effective defence against DOM based XSS is client side validation of all DOM objects as they are used, or changing server side logic to avoid using DOM properties. Since client pages are usually server supplied, it is again the responsibility of the server to protect the user.

Another, perhaps weaker, defence is to include an anti-XSS header in the HTML response supplied by the server. This can be done either programmatically, in a servlet filter, or in the web server configuration. Which to use is often a matter of taste: filters still need to be configured in the server or application configuration files, but if you already have a number of filters the extra cost is marginal. This header is not supported by old browsers.

DOM based XSS is a bit trickier to handle. Defences include:
  • Avoiding parsing and manipulating DOM objects
  • Sanitising and handling references to DOM objects carefully
The Wrap

The constant arms race between attackers and defenders means there is no sure defence against XSS, or any other attack. The defences here and in the references may however change the economics of an attack so that it is not worthwhile for the attacker, who may decide other tactics, like setting up a “legitimate” server that provides a service with a bit of theft on the side, would be easier and more profitable. This seems to be why we have phishing sites.

If you want to try out XSS attacks it is best to do so on a site you own. Doing so on a site you do not own could lead to a knock on the door at 6am. This would tend to ruin your day.


References
  1. A clear explanation of how XSS works
  2. DOM Based XSS or XSS of the Third Kind
  3. DOM Based Cross Site Scripting Prevention Cheat Sheet
  4. Beginners guide to Cross Site Scripting (XSS)