The Problem
I visit a site looking for a link exchange, look at the other links they have, and wonder whether I can exchange links with those sites too. Then I visit them and find that they’re already listed. But I have 127 categories, so I have to use the internal search tool each time, and that adds significantly to the time spent building up my links.
A Solution
Well, “Link Manager for Link Exchangers” is not quite the right label, but it was the best I could do.
This is a groovy little script which takes a URL and scrapes the page looking for outbound links. It then compares each link it finds against my own links database and presents the results to me. The output is formatted as per the admin “submit a site” form in my directory, so when I submit a site it is automatically approved with all the data filled in.
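To give a flavour of how it works, here is a rough sketch of the core idea. This is not the actual script: the fetch method, table and column names are made up for illustration.

    <?php
    // Sketch: fetch a page, pull out the outbound links, and flag the ones
    // that are already in the directory. Placeholder table/column names.
    $url  = 'http://www.example.com/links.html';
    $html = file_get_contents($url);   // the real script uses cURL or fsockopen

    // Grab every absolute href on the page
    preg_match_all('/href=["\']?(https?:\/\/[^"\'\s>]+)/i', $html, $matches);
    $found = array_unique($matches[1]);

    $db = new mysqli('localhost', 'user', 'pass', 'directory');

    foreach ($found as $link) {
        $stmt = $db->prepare('SELECT COUNT(*) FROM links WHERE url = ?');
        $stmt->bind_param('s', $link);
        $stmt->execute();
        $stmt->bind_result($count);
        $stmt->fetch();
        $stmt->close();

        echo $link . ($count ? " - already listed\n" : " - new, worth a look\n");
    }
    ?>

The real script goes further and prints the new links wrapped in the directory’s “submit a site” form, but this check-against-the-database loop is the heart of it.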
I still have to schlep around finding the right category, checking that the site is one I want, requesting the reciprocal link and so on, but a large amount of the donkey work is already done.
It’s freely available
I’ve posted the script for this at Weberdev.com: http://www.weberdev.com/get_example-4119.html, because they have a cool code contest which rewards you with licensed software.
If you want to try it, there are a few steps to getting it working (a rough configuration sketch follows the list):
- The class has four variables used for connecting to the database.
- $query needs to be updated with the actual query used to check for a link’s existence.
- $form works for my old version of the wsnlinks directory. You need to change this so that it works for the directory system you use.
- $useCurl should be true if cURL is available. The fsockopen fallback hasn’t been fully tested.
- $user_agent should be set to your site name.
- getScratchPad is for static information you need to copy and paste from time to time.
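To make those steps a bit more concrete, here is roughly what a configured copy might look like. Only $query, $form, $useCurl, $user_agent and getScratchPad are the real names from the script; the class name, the database properties and all the example values are placeholders, so check the posted script for the actual details.

    <?php
    class LinkManager
    {
        // Database connection details (placeholder property names)
        var $dbHost = 'localhost';
        var $dbUser = 'user';
        var $dbPass = 'pass';
        var $dbName = 'directory';

        // Query used to check whether a link already exists;
        // the URL being checked gets substituted in
        var $query = "SELECT id FROM links WHERE url = '%s'";

        // Pre-filled "submit a site" form for your directory system
        // (mine targets an old wsnlinks install - change it for yours)
        var $form = '<form action="/admin/addlink.php" method="post"> ... </form>';

        // true if cURL is available; otherwise the untested fsockopen
        // fallback is used
        var $useCurl = true;

        // Sent with every request - set it to your site name
        var $user_agent = 'yoursite.com link checker';

        // Static text you need to copy and paste from time to time,
        // e.g. your own link HTML for reciprocal requests
        function getScratchPad()
        {
            return '<a href="http://www.yoursite.com">Your Site</a>';
        }
    }
    ?>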
I’ve just posted a cut-down version of this script in my test bed.