WIKI OFFLINE

We use a Confluence wiki for one of the projects that I work on. Wikis can be a fantastic tool for collaboration, and this wiki is a single place where we can share information and our progress. But we’ve been having problems with the reliability of the wiki – it is unavailable at times, and can be painfully slow at others. Key information that I need is in that wiki, and when it goes down it can be difficult and frustrating. Yesterday, I had a play with wget to try and download an offline copy of the wiki to use as a backup for when it isn’t working or is going painfully slow. I’ve put the steps I took here, in case they will be useful for others.

I already had wget on my Ubuntu desktop, but if you are on Windows you can google for “wget for windows”. By default, wget respects instructions in robots.txt, and the robots.txt file for the site where our wiki is hosted is configured to prevent automated bots from leeching content. To get around this, I created a wgetrc file in my home directory and added “robots = off” to it. The wiki also requires you to log on, and wget provides several approaches to this, from HTTP authentication to letting you spoof form variables in the commands it sends. The approach I found to work is to use Firefox to access the site, logging on with my userid and password, then letting wget use the cookie generated by my Firefox session.

If you click on “Show Cookies” in the Firefox Options, you can search for the wiki URL. The useful bit was the Content value for the JSESSIONID cookie. The Cookies dialog also shows when this cookie expires.
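The “robots = off” tweak described above can be sketched as a one-off setup step. This writes to a temporary file for illustration; on a real machine the file would be `~/.wgetrc` (the exact path is an assumption based on wget’s usual per-user config location):

```shell
# Create a per-user wgetrc telling wget to ignore robots.txt.
# Using a temp file here for illustration; normally this is ~/.wgetrc.
WGETRC_FILE="$(mktemp)"
cat > "$WGETRC_FILE" <<'EOF'
# The wiki host's robots.txt blocks automated downloads;
# turn off robots.txt handling so wget will mirror the pages.
robots = off
EOF
cat "$WGETRC_FILE"
```

With this in place, every subsequent wget run picks up the setting automatically, so the mirroring command itself stays uncluttered.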
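Putting the cookie step together, the mirroring run might look like the sketch below. The wiki URL and the JSESSIONID value are placeholders (the real value comes from the cookie’s Content field in Firefox’s “Show Cookies” dialog); the script assembles and prints the command rather than hitting the network:

```shell
# Placeholder session id -- copy the real one from the JSESSIONID
# cookie's Content field in Firefox's "Show Cookies" dialog.
JSESSIONID="ABC123"
WIKI_URL="https://wiki.example.com/"   # placeholder for the real wiki URL

# --mirror           recurse and keep timestamps for incremental updates
# --convert-links    rewrite links so the copy browses offline
# --page-requisites  also fetch the images/CSS each page needs
# --header           send the Firefox session cookie with every request
CMD="wget --mirror --convert-links --page-requisites --header \"Cookie: JSESSIONID=$JSESSIONID\" $WIKI_URL"
echo "$CMD"
```

Note that the session cookie expires (the Cookies dialog shows when), so a fresh JSESSIONID is needed each time the mirror is refreshed after the old session times out.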