Every SEO eventually gets fixated on a tactic. Maybe you read 100 blog posts about how to build the “perfectly” optimized URL, and you keep tweaking and tweaking until you get it just right. Fast-forward 2 months – you’re sitting on 17 layers of 301-redirects, you haven’t done any link-building, you haven’t written any content, you’re eating taco shells with mayonnaise for lunch, and your cat is dead.
Ok, maybe that’s a bit extreme. I do see a lot of questions about the "ideal" URL structure in Q&A, though. Most of them boil down to going from pretty good URLs to slightly more pretty good URLs.
All Change Is Risky
I know it’s not what the motivational speakers want you to hear, but in the real world, change carries risk. Even a perfectly executed site-wide URL change – with pristine 301-redirects – is going to take time for Google to process. During that time, your rankings may bounce. You may get some errors. If your new URL scheme isn’t universally better than the old one, some pages may permanently lose ranking. There’s no good way to A/B test a site-wide SEO change.
More often, it’s just a case of diminishing returns. Going from pretty good to pretty gooder probably isn’t worth the time and effort, let alone the risk. So, when should you change your URLs? I’m going to dive into 5 specific scenarios to help you answer that question…
(1) Dynamic URLs
A dynamic URL creates content from code and data and carries parameters, like this:
www.example.com/store/products/index.php?product_id=1234&color=4&size=3&session_id=0923890
It’s a common SEO misconception that Google can’t read these URLs or gets cut off after 2 or 3 parameters. In 2011, that’s just not true – although there are reasonable limits on URL length. The real problems with dynamic URLs are usually more complex:
- They don’t contain relevant keywords.
- They’re more prone to creating duplicate content.
- They tend to be less user-friendly (lower click-through).
- They tend to be longer.
So, when are your URLs too dynamic? The example above definitely needs help. It’s long, it has no relevant keywords, the color and size parameters are likely creating tons of near-duplicates, and the session ID is creating virtually unlimited true duplicates. If you don’t want to be mauled by Panda, it’s time for a change.
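If you're wondering how fast those parameters multiply, here's a quick back-of-the-napkin sketch in Python (the option counts are made up, but the math isn't):

```python
from itertools import product

# Hypothetical option counts for the example URL above.
colors = range(8)   # 8 color options
sizes = range(5)    # 5 size options

base = "www.example.com/store/products/index.php?product_id=1234"
variants = [f"{base}&color={c}&size={s}" for c, s in product(colors, sizes)]

print(len(variants))  # 40 near-duplicate URLs for ONE product

# Session IDs multiply that again: if crawlers see just 100 session IDs,
# that's 40 * 100 = 4,000 crawlable copies of a single page.
```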
In other cases, though, it’s not so simple. What if you have a blog post URL like this?
www.example.com/blog.php?topic=how-to-make-waffles
It’s technically a “dynamic” URL, so should you change it to something like:
www.example.com/blog/how-to-make-waffles
I doubt you’d see much SEO benefit, or that the rewards would outweigh the risks. In a perfect world, the second URL is better, and if I were starting a blog from scratch I’d choose that one, no question. On an established site with 1000s of pages, though, I’d probably sit tight.
(2) Unstructured URLs
Another common worry people have is that their URLs don’t match their site structure. For example, they have a URL like this one:
www.example.com/how-to-make-waffles
...and they think they should add folders to represent their site architecture, like:
www.example.com/recipes/breakfast/how-to-make-waffles
There’s a false belief in play here – people often think that URL structure signals site structure. Just because your URL is 3 levels deep doesn’t mean the crawlers will treat the page as being 3 levels deep. If the first URL is 6 steps from the home-page and the second URL is 1 step away, the second URL is going to get a lot more internal link-juice (all else being equal).
You could argue that the second URL carries more meaning for visitors, but, unfortunately, it’s also longer, and the most unique keywords are pushed to the end. In most cases, I’d lean toward the first version.
Of course, the reverse also applies. Just because a URL structure is “flat” and every page is one level deep, that doesn’t mean that you’ve created a flat site architecture. Google still has to crawl your pages through the paths you’ve built. The flatter URL may have some minor advantages, but it’s not going to change the way that link-juice flows through your site.
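If you want to see the difference for yourself, crawl depth comes from your link graph, not your folder names. A toy sketch (the link graph is hypothetical):

```python
from collections import deque

# Hypothetical internal link graph: each page -> the pages it links to.
LINKS = {
    "/": ["/recipes/breakfast/how-to-make-waffles", "/category"],
    "/category": ["/sub-category"],
    "/sub-category": ["/how-to-make-waffles"],
}

def crawl_depths(start="/"):
    """Breadth-first search: a page's depth is its clicks from home."""
    depths, queue = {start: 0}, deque([start])
    while queue:
        page = queue.popleft()
        for target in LINKS.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

print(crawl_depths())
# The "deep-looking" URL (/recipes/breakfast/how-to-make-waffles) is
# 1 click from home; the "flat" URL (/how-to-make-waffles) is 3 clicks deep.
```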
Structural URLs can also create duplicate content problems. Let’s say that you allow visitors to reach the same page via 3 different paths:
www.example.com/store/products/waffle-maker
www.example.com/store/featured/waffle-maker
www.example.com/store/breakfast/waffle-maker
Now, you’ve created 2 pieces of duplicate content – Google is going to see 3 pages that look exactly the same. This is more of a crawl issue than a URL issue, and there are ways to control how these URLs get indexed (the rel="canonical" tag, for example), but an overly structured URL can exacerbate these problems.
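For instance, every path variant can point search engines at a single preferred URL. A minimal sketch (using the hypothetical paths above):

```python
# Map every known path variant to the one URL you want indexed.
PREFERRED = "/store/products/waffle-maker"
CANONICAL_MAP = {
    "/store/products/waffle-maker": PREFERRED,
    "/store/featured/waffle-maker": PREFERRED,
    "/store/breakfast/waffle-maker": PREFERRED,
}

def canonical_tag(path: str) -> str:
    """The <link> element this page should emit in its <head>."""
    target = CANONICAL_MAP.get(path, path)  # unknown pages self-canonicalize
    return f'<link rel="canonical" href="http://www.example.com{target}" />'

print(canonical_tag("/store/featured/waffle-maker"))
# <link rel="canonical" href="http://www.example.com/store/products/waffle-maker" />
```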
(3) Long URLs
How long of a URL is too long? Technically, a URL should be able to be as long as it needs to be. Some browsers and servers may have limits, but those limits are well beyond anything we’d consider sane by SEO or usability standards. For example, IE8 can support a URL of up to 2,083 characters.
Practically speaking, though, long URLs can run into trouble. Very long URLs:
- Dilute the ranking power of any given URL keyword
- May hurt usability and click-through rates
- May get cut off when people copy-and-paste
- May get cut off by social media applications
- Are a lot harder to remember
How long is too long is a bit more art than science. One of the key issues, in my mind, is redundancy. Good URLs are like good copy – if there’s something that adds no meaning, you should probably lose it. For example, here’s a URL with a lot of redundancy:
www.example.com/store/products/featured-products/product-0123-waffle-maker-waffle-maker.html
If you have a “/store” subfolder, do you also need a “/products” layer? If we know you’re in the store/products layer, does your category have to be tagged as “featured-products” (why not just “featured”)? Is the “featured” layer necessary at all? Does each product have to also be tagged with “product-”? Are the waffles so tasty you need to say it twice?
In reality, I’ve seen much longer and even more redundant URLs, but that example represents some of the most common problems. Again, you have to consider the trade-offs. Fixing a URL like that one will probably have SEO benefits. Stripping “/blog” out of all your blog post URLs might be a nice-to-have, but it isn’t going to make much practical difference.
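If you want to hunt for this kind of redundancy at scale, here's a crude sketch (the tokenizing is simplistic, but it catches the obvious offenders):

```python
import re
from collections import Counter

def repeated_words(url: str) -> dict:
    """Count words that appear more than once in a URL's path."""
    path = url.split("://")[-1].split("/", 1)[-1]  # drop scheme + domain
    words = re.split(r"[/\-_.]+", path.lower())
    counts = Counter(w for w in words if w and not w.isdigit() and w != "html")
    return {w: n for w, n in counts.items() if n > 1}

url = ("www.example.com/store/products/featured-products/"
       "product-0123-waffle-maker-waffle-maker.html")
print(repeated_words(url))
# {'products': 2, 'waffle': 2, 'maker': 2} -- each repeat is a trim candidate
```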
(4) Keyword Stuffing
Scenarios (3)-(5) have a bit of overlap. Keyword-stuffed URLs also tend to be long and may cannibalize other pages. Typically, though, a keyword-stuffed URL has either a lot of repetition or tries to tackle every variant of the target phrase. For example:
www.example.com/waffles/waffle-maker/best-waffle-maker-cheap-waffle-makers-discount-waffle-maker-reviews.html
It’s pretty rare to see a penalty based solely on keyword-stuffed URLs, but usually, if your URLs are spammy, it’s a telltale sign that your title tags, H1s, copy, etc. are spammy. Even if Google doesn’t slap you around a little, it’s just a matter of focus. If you target the same phrase 14 different ways, you may get more coverage, but each phrase will also get less attention. Prioritize and focus – not just with URLs, but with all keyword targeting. If you throw everything at the wall to see what sticks, you usually just end up with a dirty wall.
(5) Keyword Cannibalization
This is probably the toughest problem to spot, as it happens across an entire site – you can’t see it in a single URL (and, practically speaking, it’s not just a URL problem). Keyword cannibalization happens when you try to target the same keywords with too many URLs.
There’s no one right answer to this problem, as any site with a strong focus is naturally going to have pages and URLs with overlapping keywords. That’s perfectly reasonable. Where you get into trouble is splitting off pages into a lot of sub-pages just to sweep up every long-tail variant. Once you carry that too far, without the unique content to support it, you’re going to start to dilute your index and make your site look “thin”.
The URLs here are almost always just a symptom of a broader disease. Ultimately, if you’ve gotten too ambitious with your scope, you’re going to need to consolidate those pages, not just change a few URLs. This is even more important post-Panda. It used to be that thin content would only impact that content – at worst, it might get ignored. Now, thin content can jeopardize the rankings of your entire site.
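For a rough first pass at spotting cannibalization, you can group URLs by the keywords in their slugs. A toy sketch (the URL list and "stemming" are deliberately crude):

```python
import re
from collections import defaultdict

# Hypothetical site URLs to audit.
URLS = [
    "/blog/how-to-make-waffles",
    "/blog/how-to-make-belgian-waffles",
    "/blog/making-waffles-at-home",
    "/blog/how-to-clean-a-waffle-iron",
    "/store/waffle-maker",
]

STOPWORDS = {"how", "to", "a", "at", "the", "blog", "store"}

def core_terms(url):
    """Crude keyword signature: content words in the URL, singularized."""
    words = re.split(r"[/\-]+", url.lower())
    return {w.rstrip("s") for w in words if w and w not in STOPWORDS}

groups = defaultdict(list)
for url in URLS:
    for term in core_terms(url):
        groups[term].append(url)

# Any term shared by several URLs is a cannibalization candidate.
for term, pages in sorted(groups.items()):
    if len(pages) >= 3:
        print(term, "->", pages)
# waffle -> all five URLs above: five pages competing for one keyword
```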
Proceed With Caution
If you do decide a sitewide URL change is worth the risk, plan and execute it carefully. How to implement a sitewide URL change is beyond the scope of this post, but keep in mind a few high-level points:
1. Use proper 301-redirects.
2. Redirect URL-to-URL, for every page you want to keep.
3. Update all on-page links.
4. Don’t chain redirects, if you can avoid it.
5. Add a new XML sitemap.
6. Leave the old sitemap up temporarily.
Point (3) bears repeating. More than once, I’ve seen someone make a sitewide technical SEO change, implement perfect 301 redirects, but then not update all of their navigation. Your crawl paths are still the most important signal to the spiders – make sure you’re 100% internally consistent with the new URLs.
That last point (6) is a bit counterintuitive, but I know a number of SEOs who insist on it. The problem is simple – if crawlers stop seeing the old URLs, they might not crawl them to process the 301-redirects. Eventually, they’ll discover the new URLs, but it might take longer. By leaving the old sitemap up temporarily, you encourage crawlers to process the redirects. If those 301-redirects are working, this won’t create duplicate content. Usually, you can remove the old sitemap after a few weeks.
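Finally, if you want to verify points (1), (2), and (4) after launch, here's a minimal sketch using Python's requests library (the URL map is hypothetical – in practice, generate it from your redirect table):

```python
import requests

# Hypothetical old-URL -> new-URL map.
REDIRECT_MAP = {
    "http://www.example.com/blog.php?topic=how-to-make-waffles":
        "http://www.example.com/blog/how-to-make-waffles",
}

for old, new in REDIRECT_MAP.items():
    # Don't follow redirects -- we want to inspect the first hop only.
    resp = requests.get(old, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location")
    if resp.status_code != 301:
        print(f"NOT A 301 ({resp.status_code}): {old}")
    elif location != new:
        # Wrong target, or the first link in a redirect chain.
        print(f"WRONG TARGET OR CHAIN: {old} -> {location}")
    else:
        print(f"OK: {old}")
```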
Even when a URL change is done properly and for the right reasons, measure carefully and expect some rankings bounce over the first couple of weeks. Sometimes, Google just needs time to evaluate the new structure.
Posted by Dr. Pete
Source: SEOmoz