Fetches and adds titles for bare links in references.
This bot will search for references which are only made of a link without a
title (i.e. <ref>[URL]</ref> or <ref>URL</ref>), fetch the HTML title from
the link, and use it as the title of the wiki link in the reference, e.g.
<ref>[URL test - Google Search]</ref>
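The core idea can be sketched in a few lines of Python. This is an illustrative sketch, not the bot's actual code: the regex, the `TitleParser` class, and the function names are assumptions made here for the example.

```python
# Sketch (not the bot's real implementation): find <ref> tags that
# contain only a bare link, fetch the page, and use its HTML <title>
# as the link title.
import re
import urllib.request
from html.parser import HTMLParser

# Matches a reference whose body is just a URL, bracketed or not.
BARE_REF = re.compile(r'<ref>\s*\[?\s*(https?://\S+?)\s*\]?\s*</ref>')


class TitleParser(HTMLParser):
    """Collects the text inside the first <title> element."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ''

    def handle_starttag(self, tag, attrs):
        if tag == 'title':
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == 'title':
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data


def fetch_title(url):
    """Download a page and return its HTML title (stripped)."""
    with urllib.request.urlopen(url) as response:
        parser = TitleParser()
        parser.feed(response.read().decode('utf-8', errors='replace'))
        return parser.title.strip()


def add_titles(wikitext):
    """Replace bare references with titled wiki links."""
    def repl(match):
        url = match.group(1)
        title = fetch_title(url)
        return '<ref>[%s %s]</ref>' % (url, title) if title else match.group(0)
    return BARE_REF.sub(repl, wikitext)
```

The real bot also handles character encodings, dead links, and blacklisted titles, none of which this sketch attempts.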

Every 20 edits, the bot checks a special stop page; if that page has been
edited, the bot stops.
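The stop-page convention above can be sketched as follows. The class and method names are illustrative, not the bot's real API; the sketch only shows the "check every 20 edits, compare against a baseline" logic.

```python
# Hedged sketch of the stop-page check: after every 20th edit,
# compare the stop page's current revision timestamp with the one
# recorded when the bot started. Names here are invented for the example.
EDITS_PER_CHECK = 20


class StopPageWatcher:
    def __init__(self, initial_timestamp):
        self.baseline = initial_timestamp  # stop page's revision at startup
        self.edit_count = 0

    def should_stop(self, current_timestamp):
        """Call after each edit; True once the stop page has changed."""
        self.edit_count += 1
        if self.edit_count % EDITS_PER_CHECK != 0:
            return False  # only check on every 20th edit
        return current_timestamp != self.baseline
```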

DumZiBoT runs this script on en: and fr: at every new dump; running it on
de: is no longer allowed.

As this script uses noreferences.py, you need to configure it for your wiki,
or it will not work.

pdfinfo is needed for parsing pdf titles.
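Reading a title with the pdfinfo tool mentioned above could look like this. The function names are made up for the example; the `Title:` field is part of pdfinfo's standard output, but error handling is trimmed.

```python
# Sketch: extract a PDF's Title field by shelling out to pdfinfo
# (from poppler-utils). Function names are illustrative.
import subprocess


def parse_pdfinfo_title(output):
    """Return the Title field from pdfinfo's output, or None."""
    for line in output.splitlines():
        if line.startswith('Title:'):
            return line[len('Title:'):].strip() or None
    return None


def pdf_title(path):
    """Run pdfinfo on a local PDF file and return its title."""
    result = subprocess.run(['pdfinfo', path], capture_output=True,
                            text=True, check=True)
    return parse_pdfinfo_title(result.stdout)
```

This is why the -ignorepdf option below exists: on systems without pdfinfo (such as a default Windows install), the bot cannot extract PDF titles.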

See [[:en:User:DumZiBoT/refLinks]] for more information on the bot.

-cat:             Works on pages in a specific category

-headlinks:       Works on all links that are in heading tags (== [[link]] ==)

-links:           Works on all links on the given page

-new              Works on the 10 newest pages

-page:            Works on the given page

-prefixindex:     Works on all pages with the given prefix

-ref:             Works on all pages linking to the given page

-weblink:         Works on all pages with the given external link

-limit:n          Stops after n edits

-xml:dump.xml     Should be used instead of a simple page fetching method
                  for performance and load issues

-xmlstart         Page to start with when using an XML dump

-ignorepdf        Do not handle PDF files (handy if you use Windows and can't
                  get pdfinfo)
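Putting the options together, an invocation might look like this. The script filename reflinks.py is an assumption based on the page name above, and the category name is invented; the options themselves are the ones listed here.

```shell
# Hypothetical invocation: fix bare references in a category,
# stop after 50 edits, and skip PDFs if pdfinfo is unavailable.
python reflinks.py -cat:Physics -limit:50 -ignorepdf
```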