TL;DR of the post below: you have to perform all these steps & in some instances you can squeeze better performance out of other browsers too:
- This is highly optional, but convert your ids to XPath. Yes, you read that right - an XPath lookup (combined with the first 2 steps above) is at least 3 times faster than a plain id lookup in IE.
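To make the id-to-XPath tip concrete, here is a minimal sketch of the conversion. The helper name and the `id=`/`xpath=` locator prefixes follow Selenium RC's locator syntax; the function itself is our own illustration, not part of Selenium:

```python
# Hypothetical helper: rewrite a Selenium RC "id=..." locator as the
# equivalent "xpath=..." locator, per the tip above. The name
# id_to_xpath is ours; only the locator syntax comes from Selenium.
def id_to_xpath(locator):
    """Turn 'id=save-btn' into "xpath=//*[@id='save-btn']"."""
    prefix = "id="
    if locator.startswith(prefix):
        return "xpath=//*[@id='%s']" % locator[len(prefix):]
    return locator  # leave non-id locators untouched

print(id_to_xpath("id=save-btn"))  # xpath=//*[@id='save-btn']
```

You would apply something like this at the point where your framework builds locators, so callers keep writing plain ids.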
All of the above will bring your IE performance very close to that of Firefox. Now for the original post, with details:
Here is some info on what we did to improve IE performance with regard to Selenium. Before we get started, you should familiarize yourself with a few key related technologies:
- XPath - http://www.w3.org/TR/xpath/
- CSS Selectors - http://www.w3.org/TR/css3-selectors/
- DOM - http://www.w3.org/DOM/
- JSON - http://www.json.org/
Selenium launches the browser window for the app under test in two modes - single window & multi window. The default is always multi window, wherein the first window is the driver window, which processes all the commands, & the second window is the actual app under test. In single window mode, the app under test is loaded into the lower frame, which means the app under test cannot be frame busting - i.e. it must tolerate being hosted inside a frame & must not break out of frames or spawn popup windows. Since our app is frame busting, we have to use the default mode - multi window. When you send a command to evaluate an XPath query, the driver window uses the DOM handle of the app under test & runs that query by traversing the DOM.
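In miniature, that DOM traversal looks like the following. This is only a toy illustration in Python's stdlib `xml.etree.ElementTree` (which supports a small XPath subset); the page markup and the `auth-servers` id are invented for the example - a full engine like ajaxslt or the browser's native evaluator does the same walk at real page scale:

```python
import xml.etree.ElementTree as ET

# Toy stand-in for the app-under-test document the driver window holds
# a handle to. The markup and ids here are invented for the example.
page = ET.fromstring("""
<html>
  <body>
    <table id="auth-servers">
      <tr><td>Local</td></tr>
      <tr><td>LDAP</td></tr>
    </table>
  </body>
</html>""")

# Answering an XPath query means walking this tree from the root.
rows = page.findall(".//table[@id='auth-servers']/tr")
print(len(rows))  # 2
```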
- Constant chatter between the driver window & the app under test window, which was addressed recently by the Selenium devs (check out the latest code from http://code.google.com/p/selenium). The driver window ends up doing a DOM traversal on the app under test after processing the XPath query.
CSS Selectors to the rescue?
Our automation framework depends on Selenium to interact with the web pages (Juniper SSL VPN admin pages). It has various “get” & “set” methods to read & write values on those web pages. We implemented a convenience method in our framework called “table-list”, which reads the list of items presented in a specific table format & is widely used across all the admin pages. Reads are always costly, as individual pages can be huge - e.g. the Auth Servers page (linked in the bug above) was taking about 25 minutes to finish listing the auth servers with the ajaxslt library. For comparison, on FF it was taking close to 4 seconds. So after digging in the forums for a while, CSS selectors were offered as a faster alternative.
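To see why a “table-list” read gets so expensive, consider this sketch. It is a pure-Python analogy (again using stdlib `ElementTree`, with invented data), not our actual framework code: issuing one whole-document query per cell re-walks the DOM for every row, while a single traversal fetches the rows once & reads cells locally:

```python
import xml.etree.ElementTree as ET

# Invented 100-row table standing in for something like the Auth
# Servers listing.
rows_xml = "".join("<tr><td>server-%d</td></tr>" % i for i in range(100))
doc = ET.fromstring("<table>%s</table>" % rows_xml)

# Naive "table-list": one full-document search per row index, so the
# work grows roughly quadratically with the table size - this is the
# shape of the slow path we were hitting.
naive = [doc.findall(".//tr")[i].find("td").text for i in range(100)]

# Single traversal: find every row once, then read each cell relative
# to its row.
fast = [tr.find("td").text for tr in doc.findall(".//tr")]

assert naive == fast  # same answer, wildly different cost at scale
```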
If you have ever worked with the libxml2 XML processing library, you’ll know about a little nugget known as “context” nodes. In a structured document like XML or XHTML, you can traverse the document as a tree, & if you want to do repetitive processing on a set of nodes, you choose the parent of those nodes & make it a temporary root node; this node is known as the “context” node. That means all your further processing works relative to this particular context & you don’t have to traverse the whole document for each query. Well, there should be something similar in XPath too, right? Yes, of course there is.
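The context-node idea can be sketched in a few lines. This again uses stdlib `ElementTree` on an invented document rather than libxml2, but the shape is the same: one absolute lookup to establish the context, then cheap relative queries under it:

```python
import xml.etree.ElementTree as ET

# Invented config document for the example.
doc = ET.fromstring(
    "<config><servers><server>Local</server><server>LDAP</server>"
    "</servers><roles><role>Admin</role></roles></config>")

# Establish the context node with one lookup from the root...
servers = doc.find("servers")

# ...then every follow-up query is relative to that context, so the
# rest of the document is never traversed again.
names = [s.text for s in servers.findall("server")]
print(names)  # ['Local', 'LDAP']
```

In XPath proper this corresponds to evaluating a relative expression against a chosen context node (libxml2 exposes it via its XPath context object) instead of re-running an absolute `//...` query from the document root each time.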