SEO News

Webinar: Transform your content operations with DAM

5/18/2022

When it comes to promoting and selling products, content is the beginning of everything. The demand for content management is greater than ever as customers receive information across an ever-increasing number of channels.

Disorganized content workflows can be a recipe for disaster. So it’s imperative that product assets are organized, controlled, and accessible to a range of internal and external stakeholders. Join experts from McCormick & Company and Acquia as they discuss the challenges, opportunities, and lessons learned through McCormick’s DAM journey.

Register today for “Content Comes First: Transform Your Operations With DAM,” presented by Acquia.

The post Webinar: Transform your content operations with DAM appeared first on Search Engine Land.

Lucid visibility: How a publisher broke into Google Discover in less than 30 days from launch

5/18/2022

Google Discover is one of the most sought-after traffic sources by publishers, while also being one of the most confusing from a visibility standpoint. For many, it’s an enigma. For example, some publishers I help receive millions of clicks per month, while others receive absolutely none. And a ton of traffic can turn into no traffic in a flash, just like when a broad core update rolls out. More on that soon.

Google has explained that it’s looking for sites that “contain many pages that demonstrate expertise, authoritativeness, and trustworthiness (E-A-T)” when describing which content can rank in Discover. Strong E-A-T can take a long time to build up, or so you would think. For example, when a new site launches with no history, no links, etc., it often takes a long time to cross the threshold where it can appear in Discover (and appear consistently).

That’s why the case study I’m covering in this post is extremely interesting. I’ll cover a brand-new site, run by someone well known in the tech news space, that broke into Discover a blistering four weeks after launch. It is by far the quickest I have seen a new site start ranking in Discover.

And if you’re in the SEO industry, I’m sure you’ll get a kick out of who runs the site. It’s none other than Barry Schwartz, the driving force behind a lot of the news we consume in the SEO industry. But his new site has nothing to do with SEO, Search Engine Roundtable, or Search Engine Land.

Or does it? I’ll cover more about that in the article below. Let’s jump in.

Barry’s unrelenting blogging process:

As many of you know, Barry’s work ethic is insanely strong. When he decides to do something, look out. So as you can guess, Barry took his Search Engine Roundtable blogging process and employed it for Lucid Insider, his blog dedicated to news about Lucid Motors, the car manufacturer producing the luxury electric sedan Lucid Air. He blogs every day, with multiple posts covering what’s going on in the Lucid world.

Over the past several days, I went through most of his posts, about 150 so far, and I feel completely up to speed on Lucid. It’s sort of the way the SEO industry feels when reading Search Engine Roundtable. I bring this up just so you have an understanding of content production on Lucid Insider.

There are now 204 URLs indexed on the site, and the first article was published on March 15, 2022. I’ll come back to dates soon.

Time To Discover Visibility (TTDV)

Haven’t heard of TTDV yet? That’s because I just made it up. Lucid Insider started surfacing in Discover in just four weeks from the time the site launched. For the most part, that’s before any (consistent) strong signals could be built from an E-A-T standpoint, before earning many links, before publishing a ton of content on the topic, etc.

And since the first article broke into Discover, Lucid Insider has consistently appeared in the feed (both as articles and Web Stories). As of a few days ago, Discover has driven 8,351 clicks to the site out of 9,726 total clicks from Google. Traffic from Google Search is building but has only accounted for 14% of total clicks so far. Discover is now 86% of total clicks from Google.

Articles and Web Stories with a nudge from a friend:

After Barry launched Lucid Insider, I pinged him and said he should build some Web Stories, especially as Discover showed signs of life for Lucid Insider. I have covered Web Stories heavily since they launched and have built several of my own stories covering various SEO topics. I have seen first-hand what they can do visibility-wise in Search and in Discover.

Specifically for Discover, Google has a Web Stories carousel, which prominently displays a number of stories in a special feed treatment. So, I thought Barry could create some stories to possibly rank there. And that definitely worked to an extent. Both articles and Web Stories have ranked in Discover for Lucid Insider, although most of the Web Story traffic came from just one story. That story had a strong click-through rate of nearly 8%, but it’s really the only one that drove any substantial traffic.

Here is an example of that top Web Story from Lucid Insider appearing in Discover’s story carousel. The feed treatment is killer and can drive a lot of impressions and clicks.

Google News: No Visibility At All… Until This Week!

Since Discover is often tied closely to news publishers, you would think Lucid Insider would have also appeared in Google News – but that wasn’t the case. That is, until this week! The reporting didn’t even show up in Search Console for Google News until Monday. Sure, it’s not a lot of visibility yet, but this is a brand-new website. So Google is now expanding Lucid Insider’s visibility to Google News as well, and in just two months. Definitely a good sign for Barry.

It’s also worth noting that Barry doesn’t even have a Google Publisher Center account set up. That doesn’t impact visibility in Google News, but most publishers set one up, since you can control several aspects of your publisher account, including site sections, logo, etc.

Search Is Growing:

Before I dig into the possibilities of why and how Lucid Insider broke into Discover so quickly, I wanted to touch on Search. Although Search is driving a much smaller share of traffic, it is growing over time and will probably continue to do so.

Barry’s blog is starting to rank for a number of queries, and even has some featured snippets already. Just like with Search Engine Roundtable, I’m confident that Lucid Insider will do well in Search. In my opinion, Google’s Search algorithms just need more time, which is at odds with how quickly Discover’s algorithms embraced the site.

That’s a good segue to the next part of the post where I’ll cover the possible reasons why, and how, Lucid Insider is ranking so quickly in Discover. Join me as I travel down the rabbit hole…

E-A-T:

I won’t go in-depth about E-A-T overall, since there are many other posts you can read about that. But it’s important to understand that Google explains in its Discover documentation that it looks for sites exhibiting high levels of E-A-T when determining what should appear in the feed.

It’s also important to understand that Google has explained E-A-T is heavily influenced by links and mentions from authoritative sites.

I asked Gary about E-A-T. He said it's largely based on links and mentions on authoritative sites. i.e. if the Washington post mentions you, that's good.

He recommended reading the sections in the QRG on E-A-T as it outlines things well.@methode #Pubcon

— Dr. Marie Haynes (@Marie_Haynes) February 21, 2018

Therefore, I immediately jumped into the link profile of Lucid Insider. Remember, it has only been around since March 15, 2022. When I dug into the Links report in GSC, there were only 92 links there. And most of them were from SEO-related sites and content, not automotive content.

That’s because Barry had mentioned his new blog on Search Engine Roundtable and Search Engine Land, and those sites get copied and scraped all the time, so those links end up on many other sites focused on Search. I’m not saying those other sites are providing a ton of power or value link-wise, but it’s worth noting.

From an E-A-T standpoint, both Search Engine Roundtable and Search Engine Land are authoritative sites, but they don’t focus on Lucid Motors, electric cars, automotive news in general, etc. So it’s weird to think Google would provide a ton of value from those links over the long term, since they aren’t topically relevant at all.

I bolded “over the long term” because I have a feeling Google’s algorithms are still figuring things out for Lucid Insider. Longer term, I’m not sure SEO-related links will help Lucid Insider as much, as Google’s algorithms determine more about the site, its content, focus, etc.

There are definitely a few links from Lucid forums to Lucid Insider, but not many yet… And check out Majestic’s topical trust flow, just to understand the topics Lucid Insider is associated with from an inbound links standpoint. The topics reflect an SEO blog and not a blog covering the Lucid Air, at least for now.

But It’s BARRY! The Man, The Brick, The Legend

For journalists, Google can connect the dots and understand articles published across sites. Google has explained this before in a blog post about journalists and you can see it firsthand in the SERPs. For example, Google can provide an Articles carousel in the SERPs for certain journalists.

So, is Barry being the author causing Lucid Insider to break into Discover faster than another author would? Could Google really be doing that??

I’m pretty sure that’s not the case. I’ll explain more below.

Barry has an Article carousel in the SERPs, but it only contains Search Engine Land and Search Engine Roundtable articles. Lucid Insider isn’t there. The carousel shows up when you surface Barry’s Knowledge Panel by searching for “Barry Schwartz technologist”.

In addition, Barry isn’t even using Article structured data (or any structured data) to feed Google important information about the article content, including author information. He could be helping Google connect the dots by providing author details, including author.url, which is a property of Article structured data. That is what Google recommends providing, especially for journalists who write for various news publishers.
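To make that concrete, here is a minimal sketch of Article markup with author information, built as a Python dictionary and printed as a JSON-LD script block. The headline, URLs and author details are hypothetical placeholders, not Lucid Insider’s actual markup.

```python
import json

# A minimal, hypothetical Article markup sketch using the schema.org vocabulary.
# Headline, URLs and author details below are placeholders for illustration only.
article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example Lucid Air news article",
    "datePublished": "2022-05-18",
    "author": {
        "@type": "Person",
        "name": "Example Author",
        # author.url helps Google connect the author to a profile page
        "url": "https://www.example.com/about-the-author",
    },
}

# Emit the <script type="application/ld+json"> block to paste into the page.
print('<script type="application/ld+json">')
print(json.dumps(article_markup, indent=2))
print("</script>")
```

Dropping a block like that into each article gives Google an explicit, machine-readable statement of who wrote the piece and where that author’s profile lives.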

And when checking Barry’s Wikipedia page and Knowledge Panel, there is no mention of Lucid Insider. Barry should probably hop on that, but it’s not there as of now.

And last, even a related: query for Lucid Insider doesn’t yield any results (that operator shows you sites related to the domain you enter). Again, this will change over time as Google understands more about the site, the content, the focus, etc.

So, I doubt Google is making the connection there based on Barry being the author. Let’s move on.

Fresh Topic, Rabid Fans, Big Opportunity:

Lucid Motors is a new electric car manufacturer, so there is clearly not as much written or covered about it as about Tesla or other car manufacturers. That could definitely be an important factor in Lucid Insider’s visibility in Discover. For example, there’s less content for Google to choose from when selecting what to show in the Discover feed of a user who has shown an interest in Lucid.

To give some context, here is Google Trends data for interest in Lucid Motors compared to Tesla, Inc.

And here is Google Trends data showing interest in the various models (the Lucid Air versus the Tesla Model 3 or Model S):

In addition, there are some serious Lucid fans out there, eager to check out news and information from a variety of Lucid sources. And Discover is based on a person’s interests and activity, so their travels across the Lucidsphere could be leading Discover’s algorithms to surface more Lucid information in their feeds. Maybe the algorithm is hungry for that information. And again, there isn’t as much content for Lucid as for other topics, at least yet.

Here is the top Lucid forum showing some of the activity there. And check out the sidebar… there’s a familiar face there.

It will be interesting to see how Lucid Insider performs in Discover as more sources of information hit the web. I saw this first-hand with Web Stories in Discover. When the carousel first launched, I was early to publish a Web Story. And it received 304K impressions in Discover. That wasn’t the case when I published subsequent stories, as more and more publishers started creating Web Stories. Anyway, we’ll see how it goes as more sites cover Lucid Air.

No Structured Data, No Open Graph Tags…

I mentioned earlier that Barry isn’t using any structured data at all for his articles. Well, he’s not using Open Graph tags either. Note, Open Graph tags are not a ranking factor, but they do provide more information about each article, including which image to use when it’s shared. And Google Discover can use the larger image provided by Open Graph tags.

When checking the various social debugging tools, you can see that Facebook is dynamically populating Open Graph tags from other tags it finds on the page, the Twitter Card Validator fails, etc.
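If you’d rather check a page yourself than bounce between the social debuggers, a short script can list whatever Open Graph tags are (or aren’t) present. A rough sketch, assuming the requests and beautifulsoup4 packages are installed and using a placeholder URL:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URL; swap in the article you want to inspect.
url = "https://www.example.com/some-article/"

html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Collect any Open Graph meta tags (property="og:...") found in the page.
og_tags = {
    tag.get("property"): tag.get("content")
    for tag in soup.find_all("meta")
    if tag.get("property", "").startswith("og:")
}

if og_tags:
    for prop, content in sorted(og_tags.items()):
        print(f"{prop}: {content}")
else:
    print("No Open Graph tags found; platforms will guess from other tags on the page.")
```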

I’m just providing this information to show that Barry has a relatively basic setup from a technical standpoint. And he’s ranking in Discover despite not providing all the bells and whistles.

Side Note: Desktop-first Indexing. What?

Google has explained in its developer documentation that mobile-first indexing would be enabled for all new sites starting in July 2019. Well, for Lucid Insider, desktop-first indexing is enabled. Just an interesting side note I discovered (pun intended) while digging into the data and reporting.

That said, the crawl stats show mostly smartphone crawling, so maybe Google Search Console lags in displaying the correct indexing model for Lucid Insider. Again, just an interesting find.

Broad core updates and Discover impact:

I mentioned earlier that Discover visibility can be impacted by broad core updates. I have covered that heavily in my blog posts about broad core updates, in presentations covering the topic, and on Twitter. Google’s John Mueller has explained this as well and it’s in the official blog post about broad core updates.

The reason I mention this is because Lucid Insider was launched in between broad core updates (and a fresh one hasn’t launched yet). So, could Lucid Insider be seeing a surge in Discover because Google doesn’t have enough signals yet to accurately measure site quality and relevance and a broad core update hasn’t been released? As Google’s quality algorithms are refreshed with the next broad core update, will Lucid Insider disappear from Discover?

It’s totally possible. Even Barry understands that what he’s experiencing is super fast for breaking into Discover and now Google News. Four weeks is nothing when some other sites are still not in Discover, years down the line. We’ll be watching closely as the next broad core update rolls out, which will hopefully be soon. We are due, that’s for sure.

Also, John Mueller has explained in the past that some newer sites might rank very well in Search in the short-term until Google’s algorithms can pick up more signals about the site, content quality, relevance, etc. And once it does, then the site could drop, or even surge. So just like I explained above, Lucid Insider can potentially see volatility as Google picks up more signals, understands its place in the web ecosystem, etc.

Here is my tweet about that segment regarding Search. You can check out the video I linked to from the tweet where Google’s John Mueller explains more about this:

That's pretty normal for a site that's trusted by Google. I've seen pages get indexed that way within minutes of publishing and rank well immediately. Google has explained they need to estimate where it should rank to start. That can change as it learns: https://t.co/xteCwTik3L

— Glenn Gabe (@glenngabe) November 2, 2018

Summary: Will Lucid Insider’s Discover Visibility Remain Strong?

So there you have it. Lucid Insider broke into Discover extremely quickly and is driving a majority of the traffic to the site now. Search is increasing, but Discover is 86% of Google traffic as of now. Although I covered a number of areas that could be helping Barry appear and thrive in Discover, this may be short-lived, at least until the site can publish much more content, earn stronger links and mentions from automotive sites, blogs, forums, etc., and build stronger E-A-T overall.

And in true Barry form, he will be freely sharing how the site is performing over time across Google surfaces. So stay tuned for updates on how Lucid Insider is performing in Search, Discover, and Google News. And on that note, we’re expecting a broad core update soon, so it will be extremely interesting to see how that impacts Discover visibility for Lucid Insider.

The post Lucid visibility: How a publisher broke into Google Discover in less than 30 days from launch appeared first on Search Engine Land.

We’ve crawled the web for 32 years: What’s changed?

5/18/2022

It was 20 years ago this year that I authored a book called “Search Engine Marketing: The Essential Best Practice Guide.” It is generally regarded as the first comprehensive guide to SEO and the underlying science of information retrieval (IR).

I thought it would be useful to look at what I wrote back in 2002 to see how it stacks up today. We’ll start with the fundamental aspects of what’s involved with crawling the web.

It’s important to understand the history and background of the internet and search to understand where we are today and what’s next. And let me tell you, there is a lot of ground to cover.

Our industry is now hurtling into another new iteration of the internet. We’ll start by reviewing the groundwork I covered in 2002. Then we’ll explore the present, with an eye toward the future of SEO, looking at a few important examples (e.g., structured data, cloud computing, IoT, edge computing, 5G).

All of this is a mega leap from where the internet all began.

Join me, won’t you, as we meander down search engine optimization memory lane.

An important history lesson

We use the terms world wide web and internet interchangeably. However, they are not the same thing. 

You’d be surprised how many don’t understand the difference. 

The first iteration of the internet was invented in 1966. A further iteration that brought it closer to what we know now was invented in 1973 by scientist Vint Cerf (currently chief internet evangelist for Google).

The world wide web was invented by British scientist Tim Berners-Lee (now Sir) in the late 1980s.

Interestingly, most people have the notion that he spent something equivalent to a lifetime of scientific research and experimentation before his invention was launched. But that’s not the case at all. Berners-Lee invented the world wide web during his lunch hour one day in 1989 while enjoying a ham sandwich in the staff café at the CERN Laboratory in Switzerland.

And to add a little clarity to the headline of this article, from the following year (1990) the web has been crawled one way or another by one bot or another to this present day (hence 32 years of crawling the web).

Why you need to know all of this

The web was never meant to do what we’ve now come to expect from it (and those expectations are constantly becoming greater).

Berners-Lee originally conceived and developed the web to meet the demand for automated information-sharing between scientists in universities and institutes around the world.

So, a lot of what we’re trying to make the web do is alien to the inventor and the browser (which Berners-Lee also invented).

And this is very relevant to the major scalability challenges search engines face in trying to harvest and refresh the content they have already indexed, while at the same time trying to discover and index new content.

Search engines can’t access the entire web

Clearly, the world wide web came with inherent challenges. And that brings me to another hugely important fact to highlight.

It’s the “pervasive myth” that began when Google first launched and seems to be as pervasive now as it was back then. And that’s the belief people have that Google has access to the entire web.

Nope. Not true. In fact, nowhere near it.

When Google first started crawling the web in 1998, its index was around 25 million unique URLs. Ten years later, in 2008, it announced it had hit the major milestone of having seen 1 trillion unique URLs on the web.

More recently, I’ve seen numbers suggesting Google is aware of some 50 trillion URLs. But here’s the big difference we SEOs all need to know:

  • Being aware of some 50 trillion URLs does not mean they are all crawled and indexed.

And 50 trillion is a whole lot of URLs. But this is only a tiny fraction of the entire web.

Google (or any other search engine) can crawl an enormous amount of content on the surface of the web. But there’s also a huge amount of content on the “deep web” that crawlers simply can’t get access to. It’s locked behind interfaces leading to colossal amounts of database content. As I highlighted in 2002, crawlers don’t come equipped with a monitor and keyboard!

Also, the 50 trillion unique URLs figure is arbitrary. I have no idea what the real figure is at Google right now (and they have no idea themselves of how many pages there really are on the world wide web either).

These URLs don’t all lead to unique content, either. The web is full of spam, duplicate content, iterative links to nowhere and all sorts of other kinds of web debris.

  • What it all means: Of the arbitrary 50 trillion URLs figure I’m using, which is itself a fraction of the web, only a fraction of that eventually gets included in Google’s index (and other search engines) for retrieval.

Understanding search engine architecture

In 2002, I created a visual interpretation of the “general anatomy of a crawler-based search engine”:

Clearly, this image didn’t earn me any graphic design awards. But it was an accurate indication of how the various components of a web search engine came together in 2002. It certainly helped the emerging SEO industry gain a better insight into why the industry, and its practices, were so necessary.

Although the technologies search engines use have advanced greatly (think: artificial intelligence/machine learning), the principal drivers, processes and underlying science remain the same.

Although the terms “machine learning” and “artificial intelligence” have found their way more frequently into the industry lexicon in recent years, I wrote this in the section on the anatomy of a search engine 20 years ago:

“In the conclusion to this section I’ll be touching on ‘learning machines’ (vector support machines) and artificial intelligence (AI) which is where the field of web search and retrieval inevitably has to go next.”

‘New generation’ search engine crawlers

It’s hard to believe that there are literally only a handful of general-purpose search engines around the planet crawling the web, with Google (arguably) being the largest. I say that because back in 2002, there were dozens of search engines, with new startups almost every week.

As I frequently mix with much younger practitioners in the industry, I still find it kind of amusing that many don’t even realize that SEO existed before Google was around.

Although Google gets a lot of credit for the innovative way it approached web search, it learned a great deal from a guy named Brian Pinkerton. I was fortunate enough to interview Pinkerton (on more than one occasion).

He’s the inventor of the world’s first full-text retrieval search engine called WebCrawler. And although he was ahead of his time at the dawning of the search industry, he had a good laugh with me when he explained his first setup for a web search engine. It ran on a single 486 machine with 800MB of disk and 128MB memory and a single crawler downloading and storing pages from only 6,000 websites!

Somewhat different from what I wrote about Google in 2002 as a “new generation” search engine crawling the web.

“The word ‘crawler’ is almost always used in the singular; however, most search engines actually have a number of crawlers with a ‘fleet’ of agents carrying out the work on a massive scale. For instance, Google, as a new generation search engine, started with four crawlers, each keeping open about three hundred connections. At peak speeds, they downloaded the information from over one hundred pages per second. Google (at the time of writing) now relies on 3,000 PCs running Linux, with more than ninety terabytes of disk storage. They add thirty new machines per day to their server farm just to keep up with growth.”

And that scaling up and growth pattern at Google has continued at a pace since I wrote that. It’s been a while since I saw an accurate figure, but maybe a few years back, I saw an estimate that Google was crawling 20 billion pages a day. It’s likely even more than that now.

Hyperlink analysis and the crawling/indexing/whole-of-the-web conundrum

Is it possible to rank in the top 10 at Google if your page has never been crawled?

Improbable as it may seem in the asking, the answer is “yes.” And again, it’s something I touched on in 2002 in the book:

“From time to time, Google will return a list, or even a single link to a document, which has not yet been crawled but with notification that the document only appears because the keywords appear in other documents with links, which point to it.”

What’s that all about? How is this possible?

Hyperlink analysis. Yep, that’s backlinks!

There’s a difference between crawling, indexing and simply being aware of unique URLs. Here’s the further explanation I gave:

“If you go back to the enormous challenges outlined in the section on crawling the web, it’s plain to see that one should never assume, following a visit from a search engine spider, that ALL the pages in your website have been indexed. I have clients with websites of varying degrees in number of pages. Some fifty, some 5,000 and in all honesty, I can say not one of them has every single page indexed by every major search engine. All the major search engines have URLs on the “frontier” of the crawl as it’s known, i.e., crawler control will frequently have millions of URLs in the database, which it knows exist but have not yet been crawled and downloaded.”

There were many times I saw examples of this. The top 10 results following a query would sometimes have a basic URL displayed with no title or snippet (or metadata).

Here’s an example I used in a presentation from 2004. Look at the bottom result, and you’ll see what I mean.

Google is aware of the importance of that page because of the linkage data surrounding it. But no supporting information has been pulled from the page, not even the title tag, as the page obviously hasn’t been crawled. (Of course, this can also occur with the evergreen, still-happens-all-the-time little blunder of leaving a robots.txt file in place that prevents the site from being crawled.)

I highlighted that sentence above in bold for two important reasons:

  • Hyperlink analysis can denote the “importance” of a page before it even gets crawled and indexed. Along with bandwidth and politeness, the importance of a page is one of the three primary considerations when plotting the crawl. (We’ll dive deeper into hyperlinks and hyperlink-based ranking algorithms in future installments.)
  • Every now and again, the “are links still important” debate flares up (and then cools down). Trust me. The answer is yes, links are still important.

I’ll just expand on the “politeness” thing a little more, as it’s directly connected to the robots.txt file/protocol. All the challenges to crawling the web that I explained 20 years ago still exist today (at a greater scale).

Because crawlers retrieve data at vastly greater speed and depth than humans, they could (and sometimes do) have a crippling impact on a website’s performance. Servers can crash just trying to keep up with the number of rapid-fire requests.

That’s why a politeness policy is required, governed on the one hand by the programming of the crawler and the plot of the crawl, and on the other by the robots.txt file.
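To illustrate the robots.txt side of that politeness policy, Python’s standard library includes a robots.txt parser. The sketch below uses a hypothetical site and user agent to check whether a URL may be fetched and to honor any crawl delay the site requests:

```python
import time
from urllib import robotparser

# Hypothetical crawler identity and target site.
USER_AGENT = "ExampleBot"
SITE = "https://www.example.com"

rp = robotparser.RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()  # download and parse the robots.txt file

url = SITE + "/some-page/"

if rp.can_fetch(USER_AGENT, url):
    # Respect any Crawl-delay directive before requesting the page.
    delay = rp.crawl_delay(USER_AGENT) or 1  # fall back to 1 second if none is given
    time.sleep(delay)
    print(f"Polite to fetch {url} after waiting {delay}s")
else:
    print(f"robots.txt disallows {url} for {USER_AGENT}")
```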

The faster a search engine can crawl new content to be indexed and recrawl existing pages in the index, the fresher the content will be.

Getting the balance right? That’s the hard part.

Let’s say, purely hypothetically, that Google wanted to keep thorough coverage of news and current affairs and decided to crawl the entire New York Times website every day (or even every week) without any politeness factor at all. The crawler would most likely use up all of the site’s bandwidth. And that would mean nobody could read the paper online because of bandwidth hogging.

Thankfully now, beyond just the politeness factor, we have Google Search Console, where it’s possible to adjust the speed and frequency with which websites are crawled.

What’s changed in 32 years of crawling the web?

OK, we’ve covered a lot of ground as I knew we would.

There have certainly been many changes to both the internet and the world wide web – but the crawling part still seems to be impeded by the same old issues.

That said, a while back, I saw a presentation by Andrey Kolobov, a researcher in the field of machine learning at Bing. He created an algorithm to do a balancing act with the bandwidth, politeness and importance issue when plotting the crawl.

I found it highly informative, surprisingly straightforward and pretty easily explained. Even if you don’t understand the math, no worries, you’ll still get an indication of how he tackles the problem. And you’ll also hear the word “importance” in the mix again.

Basically, as I explained earlier about URLs on the frontier of the crawl, hyperlink analysis is important before you get crawled and may well be the reason behind how quickly you get crawled. You can watch the short video of his presentation here.

Now let’s wind up with what’s occurring with the internet right now and how the web, internet, 5G and enhanced content formats are cranking up.

Structured data

The web has been a sea of unstructured data from the get-go. That’s the way it was invented. And as it still grows exponentially every day, the challenge the search engines have is crawling and recrawling existing documents in the index to analyze them and pick up any changes, so that the index stays fresh.

It’s a mammoth task.

It would be so much easier if the data were structured. And so much of it actually is, as structured databases drive so many websites. But the content and the presentation are separated, of course, because the content has to be published purely in HTML.

There have been many attempts that I’ve been aware of over the years, where custom extractors have been built to attempt to convert HTML into structured data. But mostly, these attempts were very fragile operations, quite laborious and totally error-prone.

Something else that has changed the game completely is that websites in the early days were hand-coded and designed for the clunky old desktop machines. But now, the number of varying form factors used to retrieve web pages has hugely changed the presentation formats that websites must target.

As I said, because of the inherent challenges with the web, search engines such as Google are never likely to be able to crawl and index the entire world wide web.

So, what would be an alternative way to vastly improve the process? What if we let the crawler continue to do its regular job and make a structured data feed available simultaneously?

Over the past decade, the importance and usefulness of this idea have grown and grown. To many, it’s still quite a new idea. But, again, Pinkerton, WebCrawler inventor, was way ahead on this subject 20 years ago.

He and I discussed the idea of domain-specific XML feeds to standardize the syntax. At that time, XML was new and considered to be the future of browser-based HTML.

It’s called extensible because it’s not a fixed format like HTML. XML is a “metalanguage” (a language for describing other languages which lets you design your own customized markup languages for limitless diverse types of documents). Various other approaches were vaunted as the future of HTML but couldn’t meet the required interoperability.

However, one approach that did get a lot of attention is known as MCF (Meta Content Framework), which introduced ideas from the field of knowledge representation (frames and semantic nets). The idea was to create a common data model in the form of a directed labeled graph.

Yes, the idea became better known as the semantic web. And what I just described is the early vision of the knowledge graph. That idea dates to 1997, by the way.

All that said, it was 2011 when everything started to come together, with schema.org being founded by Bing, Google, Yahoo and Yandex. The idea was to present webmasters with a single vocabulary. Different search engines might use the markup differently, but webmasters had to do the work only once and would reap the benefits across multiple consumers of the markup.
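To give a small taste of that single vocabulary, here is a sketch of schema.org Product markup expressed as a Python dictionary and serialized to JSON-LD. The product itself is invented purely for illustration:

```python
import json

# Hypothetical Product markup using the shared schema.org vocabulary.
product_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "description": "A placeholder product used to illustrate the markup.",
    "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product_markup, indent=2))
```

The same JSON-LD block can be consumed by any search engine that understands the schema.org vocabulary, which was exactly the point of the joint effort.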

OK – I don’t want to stray too far into the huge importance of structured data for the future of SEO. That must be an article of its own. So, I’ll come back to it another time in detail.

But you can probably see that, since Google and other search engines can’t crawl the entire web, being able to feed them structured data so pages can be rapidly updated without having to be recrawled repeatedly makes an enormous difference.

Having said that, and this is particularly important, you still need to get your unstructured data recognized for its E-A-T (expertise, authoritativeness, trustworthiness) factors before the structured data really kicks in.

Cloud computing

As I’ve already touched on, over the past four decades, the internet has evolved from a peer-to-peer network to overlaying the world wide web to a mobile internet revolution, Cloud computing, the Internet of Things, Edge Computing, and 5G.

The shift toward Cloud computing gave us the industry phrase “the Cloudification of the internet.”

Huge warehouse-sized data centers provide services to manage computing, storage, networking, data management and control. That often means that Cloud data centers are located near hydroelectric plants, for instance, to provide the huge amount of power they need.

Edge computing

Now, the “Edgeification of the internet” turns it all back around, moving computing from being further away from the user to being right next to them.

Edge computing is about physical hardware devices located in remote locations at the edge of the network, with enough memory, processing power and computing resources to collect data, process it and act on it in almost real time with limited help from other parts of the network.

By placing computing services closer to these locations, users benefit from faster, more reliable services and better user experiences, and companies benefit by being better able to support latency-sensitive applications, identify trends and offer vastly superior products and services. The terms IoT devices and edge devices are often used interchangeably.

5G

With 5G and the power of IoT and Edge computing, the way content is created and distributed will also change dramatically.

Already we see elements of virtual reality (VR) and augmented reality (AR) in all kinds of different apps. And in search, it will be no different.

AR imagery is a natural initiative for Google, and they’ve been messing around with 3D images for a couple of years now just testing, testing, testing as they do. But already, they’re incorporating this low-latency access to the knowledge graph and bringing in content in more visually compelling ways.

During the height of the pandemic, the now “digitally accelerated” end-user got accustomed to engaging with the 3D images Google was sprinkling into the mix of results. At first it was animals (dogs, bears, sharks) and then cars.

Last year, Google announced that during that period the 3D featured results were interacted with more than 200 million times. That means the bar has been set, and we all need to start thinking about creating these richer content experiences because the end-user (perhaps your next customer) is already expecting this enhanced type of content.

If you haven’t experienced it yourself yet (and not everyone even in our industry has), here’s a very cool treat. In this video from last year, Google introduces famous athletes into the AR mix. And superstar athlete Simone Biles gets to interact with her AR self in the search results.

IoT

Having established the various phases/developments of the internet, it’s not hard to tell that everything being connected in one way or another will be the driving force of the future.

Because of the advanced hype that much technology receives, it’s easy to dismiss it with thoughts such as IoT is just about smart lightbulbs and wearables are just about fitness trackers and watches. But the world around you is being incrementally reshaped in ways you can hardly imagine. It’s not science fiction.

IoT and wearables are two of the fastest-growing technologies and hottest research topics that will hugely expand consumer electronics applications (communications especially).

The future is not late in arriving this time. It’s already here.

We live in a connected world where billions of computers, tablets, smartphones, wearable devices, gaming consoles and even medical devices, indeed entire buildings are digitally processing and delivering information.

Here’s an interesting little factoid for you: it’s estimated that the number of devices and items connected to IoT already eclipses the number of people on earth.

Back to the SEO future

We’ll stop here. But much more to come.

I plan to break down what we now know as search engine optimization in a series of monthly articles scoping the foundational aspects. The term “SEO” wouldn’t enter the lexicon for a while, though, as the cottage industry of “doing stuff to get found at search engine portals” began to emerge in the mid-to-late 1990s.

Until then – be well, be productive and absorb everything around you in these exciting technological times. I’ll be back again with more in a few weeks.

The post We’ve crawled the web for 32 years: What’s changed? appeared first on Search Engine Land.

GA4 isn’t all it’s cracked up to be. What would it look like to switch?

5/18/2022

Google Analytics is the top player when it comes to tracking website visitors. The platform’s value is reflected in its popularity: it’s the market leader, boasting an 86% share. But with great value comes great responsibility, and Google Analytics is lacking in that department.

Designed to maximize data collection, often at the expense of data privacy, Google Analytics and its parent company, Google LLC, have been on the radar of European privacy activists for some time now. Reports of questionable privacy practices by Google have led to legal action based on the General Data Protection Regulation (GDPR) that might result in a complete ban on Google Analytics in Europe.

On top of that, Google recently announced it will end support for Universal Analytics in July of 2023, forcing users to switch to Google Analytics 4 (GA4). So, if the switch must be made, why not seek a new analytics provider? There are great free and paid solutions that allow organizations to balance valuable data collection with privacy and compliance. With a GDPR-compliant analytics solution in place, your data collection becomes what it should be: predictable and sustainable.

The problem with GA4 from a user perspective

Universal Analytics’ successor is very different from what you’re familiar with. Apart from the new user interface, which many find challenging to navigate, there is a laundry list of issues with the feature set in GA4—from no bounce rate metrics to a lack of custom channel groups. Here are some of the limitations in GA4 from a user perspective that you might find frustrating.

Not-so-seamless migration

GA4 introduces a different reporting and measurement technology that is neither well understood nor widely accepted by the marketing community. There is no data or tag migration between the platforms, meaning you’d have to start from scratch. The challenge grows with the organization’s size—you can have hundreds of tags or properties to move.

Limits on custom dimensions

A custom dimension is an attribute you configure in your analytics tool to dive deeper into your data. You can then pivot or segment this data to isolate a specific audience or traffic for deeper analysis. While GA4 allows you to use custom dimensions to segment your reports, there’s a strict limit—you can only use up to 50.

Lack of custom channel grouping

Channel groupings are rule-based groupings of marketing channels and, when customized, allow marketers to check the performance of said channels efficiently. Unlike Universal Analytics, GA4 does not allow you to create custom channel groupings in the new interface, only default channel groupings.

Why Google is giving you a short deadline to make the switch to GA4

It’s startling to consider the deadline Google has left the analytics community when it comes to acting: Universal Analytics will stop processing new hits on July 1, 2023. This could be a way to motivate users to migrate more quickly. Perhaps Google was disappointed with the speed of adoption for GA4 and decided to act decisively for this next version.

Another possibility for the short deadline is that Google wants to cut costs and rid itself of technical debt associated with thousands of websites with legacy solutions installed (many of those users are not active users of the product). Since GA4 is designed to support Google’s advertising network, it guarantees more revenue than the competition.

Whatever the case, users need to prepare to move to GA4—or switch to an alternative. 

The problem with GA4 from a privacy standpoint

Google claims the new platform is designed with privacy at its core, but the privacy concerns are far from over. A lack of clear guidelines on data processing has many questioning the legality of GA4 in Europe. Here are some of the reasons that lead us to believe GA4 won’t last long in Europe.

Recent laws and regulations

Google makes it difficult to collect data in line with data protection regulations such as GDPR. This means that organizations engaged in gathering, storing and processing data about EU citizens have to adjust their policies and introduce serious technological changes to be GDPR-compliant.

One of the key compliance issues with Google Analytics is that it saves user data, including information about EU residents, on U.S.-based cloud servers. As a U.S.-based technology company, Google must comply with U.S. surveillance laws, such as the Cloud Act. This legislation states that Google must disclose certain data when requested, even when that data is located outside of the U.S.

In the judgment known as Schrems II, a European court ruled that sending personal data from the EU to the U.S. via transatlantic transfers is illegal if companies can’t guarantee this data will be safe from U.S. intelligence.

Companies with an international presence must now adapt to a wide range of regulations, often with different requirements and restrictions.

Transparency

A Google guide implies data is transferred to the closest Google Analytics server hub. However, the data may be stored in a geographic location that doesn’t offer privacy protection the EU considers adequate. This lack of transparency poses a problem for Google and for organizations using Google Analytics in the EU.

Newly introduced features in GA4 partially address this concern by allowing the first part of data collection (and anonymization) on European servers. However, data can, and most likely will, be sent to the U.S. The best thing to do is be open when it comes to collecting data from people.

With proper transparency, individuals feel a sense of safety and assurance. In return, organizations get more data because individuals now feel taken care of and have the trust needed to provide data.

Time to re-think how you handle consumers’ data

The advantage of these regulations is users’ increased consciousness about their data. This is where alternatives come in handy. They provide you with privacy features you need to comply with laws and obtain the data you want. So, thinking about making the switch to a Google Analytics alternative? Here’s what you need to know.

Addressing concerns about switching to an alternative analytics solution

A lot of users may be hesitant to make the switch. It makes sense—Google has dominated the marketplace for so long that it might feel like too big of a hassle to switch. For a marketing director or CMO to suggest using a different analytics tool and then for that tool to have even more limitations than the last would not be a good look.

You need to make an informed decision and choose the platform whose feature set fits your organization’s needs to process user-level data while building trust with visitors. Here are some facts and myths about switching:

I’ll lose historical data.

This is a fact, but not for long. Some alternatives have developed data importers in the wake of Universal Analytics (Google Analytics v3) being deprecated.

It’s expensive and hard to switch.

This is a myth. Alternatives are built with easier user interfaces, use similar measurement methodologies, and often have solutions to help with Google tag migrations.

Alternatives don’t offer demographic data. 

This is true: Google’s first-party data adds gender, age group and interests to profile data, and none of the alternatives can offer such data enrichment.

I’ll miss some reporting capabilities.

This is false. Each alternative has unique reporting capabilities, and some are very flexible, allowing for more transformations and data exports than Universal Analytics.

It is easier to run advertising campaigns with Universal Analytics.

This is true. There is deep integration with Google Analytics and Google Ads/Google Marketing Platform, which gives access to an extensive repertoire of data.

I’ll lose my rank in Google Search.

This is a myth. Alternatives’ customers don’t report a lower rank in Google Search. Make sure your site is fast, mobile-friendly, popular (links) and with complete metadata.

The mindset to take when switching

Marketers considering switching to a new platform need to adopt a new analytics mindset. We are experiencing rapidly rising awareness that data is valuable and must be protected. Since the future of marketing requires users’ consent, the vendor you choose must allow you to perform analytics in a privacy-friendly way.

Our intention with Piwik PRO Analytics Suite has always been to give clients powerful analytics capabilities along with key privacy and security features. The user interface and feature sets are similar to Universal Analytics, so marketers feel at home when switching to our platform.

Piwik PRO is geared toward delivering valuable insights alongside privacy and compliance. Notably, switching to Piwik PRO removes the privacy and compliance issues associated with Google Analytics, so you can collect data predictably and sustainably. There are both free and paid plans, which allows different organizations to get an analytics service tailored to their needs. If you’d like to learn more about Google Analytics alternatives or get more information on the Piwik PRO Analytics Suite, visit piwik.pro.

This article was written by Maciej Zawadzinski, CEO, Piwik PRO.

The post GA4 isn’t all it’s cracked up to be. What would it look like to switch? appeared first on Search Engine Land.

What to look for in a technical SEO audit

5/18/2022

According to Techradar, there are more than 547,200 new websites every day. Google has to crawl and store all these sites in its database, which occupies physical space on its servers.

The sheer volume of content available now allows Google to prioritize well-designed, fast sites that provide helpful, relevant information for their visitors.

The bar has been raised, and if your site is slow or has a lot of clutter in the code, Google is unlikely to reward your site with strong rankings.

If you really want to jump ahead of your competitors, you have a huge opportunity to be better than them by optimizing your site’s code, speed and user experience. These are some of the most important ranking signals and will continue to be as the internet becomes more and more inundated with content.

Auditing your website’s technical SEO can be an extremely dense process with many moving pieces. If you are not a developer, it may be difficult to comprehend some of these elements.

Ideally, you should have a working knowledge of how to run an audit to oversee the implementation of technical SEO fixes. Some of these may require developers, designers, writers or editors.

Fortunately, various tools will run the audits for you and give you all the comprehensive data you need to improve your website’s technical performance.

Let’s review some of the data points that will come up, regardless of what technical SEO audit tool you use:

Structure

  • Crawlability: Can Google easily crawl your website, and how often?
  • Security: Is your website secure with an HTTPS certificate?
  • On-page SEO elements: Does every page have the keyword in the title tag, meta description, filenames, and paths? Does it have the same on-page elements as sites ranking in the top 10 for your target keywords?
  • Internal links: Does your site have internal links from other site pages? Other elements you can consider are site structure, breadcrumbs, anchor text and link sculpting.
  • Headings: Is the primary keyword in the H1? Do you have H2s with supporting keywords?
  • Compliance issues: Does your site’s code include valid HTML? What is the accessibility score?
  • Images: Do your images load quickly? Are they optimized with titles, keywords and the srcset attribute? Do you use newer image formats such as WebP and SVG?
  • Schema and the semantic web: Are your schema tags in place and set up properly? Some schema tags you can use include WebPage, BreadcrumbList, Organization, Product, Review, Author/Article, Person, Event, Video/Image, Recipe, FAQ and How-To.
  • Canonicals: Do you have canonical tags in place, and are they set up properly?
  • Sitemap: Do you ONLY have valid pages in the sitemap, and are redirects and 404 pages removed from it?

These are simply a few of the elements you’d want to look into that most tools will report on.  
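To give a feel for what these tools are checking under the hood, here is a rough single-page check in Python. It assumes the requests and beautifulsoup4 packages are installed and uses a placeholder URL; a real audit tool runs checks like this across thousands of URLs and many more signals:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URL; point this at one of your own pages.
url = "https://www.example.com/"

resp = requests.get(url, timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

# Pull a handful of the on-page elements mentioned above.
title = soup.title.string.strip() if soup.title and soup.title.string else None
description = soup.find("meta", attrs={"name": "description"})
canonical = soup.find("link", attrs={"rel": "canonical"})
h1_tags = [h1.get_text(strip=True) for h1 in soup.find_all("h1")]

print(f"Status code:      {resp.status_code}")
print(f"HTTPS:            {url.startswith('https://')}")
print(f"Title tag:        {title}")
print(f"Meta description: {description.get('content') if description else None}")
print(f"Canonical URL:    {canonical.get('href') if canonical else None}")
print(f"H1 headings:      {h1_tags}")
```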

User experience

Google has been placing more focus on ranking factors revolving around user experience. As the web collectively becomes more organized, Google is raising the bar for user experience. Focusing on user experience will ultimately increase its advertising revenue.

You’ll want to audit the user experience of your website.

  • Is it fast?
  • How quickly is the page interactive?
  • Can it be navigated easily on mobile devices?
  • Is the hierarchy of the site clear and intuitive?

Some of the ways of measuring this include:

  • Site speed
  • Core Web Vitals (see the sketch after this list)
  • Mobile-friendliness
  • Structured navigation
  • Intrusive ads or interstitials
  • Design
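One way to pull some of these measurements programmatically is Google’s PageSpeed Insights API, which returns Lighthouse lab data and, where available, Core Web Vitals field data. The sketch below uses a placeholder URL, assumes the requests package is installed, and reads the response fields defensively, since the exact response shape should be confirmed against the current API documentation:

```python
import requests

# Public PageSpeed Insights v5 endpoint; an API key is optional for light use.
PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

# Placeholder page to test.
params = {"url": "https://www.example.com/", "strategy": "mobile"}

data = requests.get(PSI_ENDPOINT, params=params, timeout=60).json()

# Lighthouse lab performance score (0 to 1), read defensively with .get().
performance = (
    data.get("lighthouseResult", {})
    .get("categories", {})
    .get("performance", {})
    .get("score")
)
print(f"Lighthouse performance score: {performance}")

# Core Web Vitals field data (CrUX), present only when Google has enough real-user samples.
metrics = data.get("loadingExperience", {}).get("metrics", {})
for name, values in metrics.items():
    print(f"{name}: percentile={values.get('percentile')} category={values.get('category')}")
```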

Make sure you are working with a developer that is well versed in the latest technical SEO elements and who can apply the changes required to raise your SEO performance score.

Technical SEO audit tools

Some of the most popular SEO audit tools include:

  • Semrush Site Audit
  • Screaming Frog
  • SiteBulb
  • Website Auditor
  • ContentKing App
  • GTMetrix
  • Pingdom
  • Google Lighthouse
  • Google PageSpeed Insights

We’ll look at a couple of these tools and the data points you can gain from them.

Semrush site audit

Once you create a project in Semrush, you can run a site audit. Your overview will look like this:

Click on the “Issues” tab, and you’ll see a detailed list of the issues that were uncovered, divided by Errors, Warnings and Notices:

If you click on an item, you’ll see a list of the pages affected by each issue.

Review these, as sometimes the data points are not valid.

Ideally, you should export the CSV for each of these issues and save them in a folder.

Screaming Frog

This desktop tool will use your computer and IP to crawl your website. Once completed, you’ll get various reports that you can download.  

Here are a couple of example reports:

This is an overview report that you can use to track technical audit KPIs.

For example, this report gives you details of the meta titles for each of your pages.

You can use the Bulk Export feature to get all of the data points downloaded into spreadsheets, which you can then add to your Audit folder.

SiteBulb

Like the others, SiteBulb will do a comprehensive crawl of your website. The benefit of this tool is that it gives you more in-depth technical information than some of the other tools.

You’ll get an Audit Score, SEO Score, and Security Score. As you implement fixes, you’ll want to see these scores increasing over time.

Google Search Console

The Index Coverage report contains a treasure trove of data that you can use to fix the issues Google has discovered on your site.

In the details section, you’ll see a list of the errors, and if you click through to each report, they will include the list of pages affected by each issue.

Implementing technical SEO fixes

Once you have all of your CSV exports, you can create a list of all of the issues and go through them to remove duplicate reports created by the different tools.
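If you’re comfortable with a little scripting, that consolidation step can be automated. Here is a rough pandas sketch that assumes each export has been saved into an audits/ folder as a CSV with at least URL and Issue columns (real exports will likely need some column renaming first):

```python
import glob
import pandas as pd

# Hypothetical layout: every tool's export lives in audits/ with "URL" and "Issue" columns.
frames = []
for path in glob.glob("audits/*.csv"):
    df = pd.read_csv(path)
    df["Source"] = path  # remember which tool reported the issue
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)

# Collapse duplicate URL/issue pairs reported by more than one tool.
deduped = combined.drop_duplicates(subset=["URL", "Issue"]).copy()

# Placeholder columns to fill in as you triage each fix.
deduped["Department"] = ""
deduped["Priority"] = ""

deduped.to_csv("technical-seo-fixes.csv", index=False)
print(f"{len(combined)} reported rows collapsed to {len(deduped)} unique issues")
```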

Next, you can assign each fix to a department and a priority level. Some may need to be tackled by your developer, others by your content team, such as rewriting duplicate titles or improving descriptions on pages with low CTR.

Here’s what your list might look like:

Each project should include notes, observations, or details about how to implement the fix. 

Most websites will have dozens of issues, so the key here is to prioritize the issues and make sure that you are continuously fixing and improving your site’s performance each month.

E-A-T Audit

It’s important that your website reflects topical authority and relevance. E-A-T means:

  • Expertise: Are you an expert in your field? Are your authors authoritative?
  • Authoritativeness: Are you considered authoritative in your field by industry organizations? Do your social profiles, citations, social shares and link profile reflect this authoritativeness?
  • Trustworthiness: Can visitors trust that your website is secure and that their data is safe? Does your site have an SSL certificate, including privacy disclaimers, refund information, contact info and credentials?

Google has an entire team of quality raters who manually review websites to assess them based on these parameters. Google has even published its quality rater guidelines, which cover E-A-T, for site owners to reference.

If your website is in a YMYL (Your Money, Your Life) niche, these factors are even more important as Google attempts to protect the public from misinformation.

Analytics audit

  • Is your Google Analytics code working properly?
  • Do you have the proper goals and funnels to fully understand how users navigate your site?
  • Are you importing data from your Google Ads and Search Console accounts to visualize all of your data in Google Analytics? 

BrainLabsDigital has created a Google Analytics audit checklist that will help you review your Google Analytics account. The accompanying article will give you a straightforward and strategic approach to ensuring your Google Analytics is set up properly.
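
As a quick sanity check on that first question, you can also script a simple crawl of a few key pages to confirm the tracking snippet is actually in the HTML. The sketch below uses placeholder URLs and a hypothetical GA4 measurement ID, and it only confirms the tag is present, not that hits are being recorded correctly.

```python
import requests

MEASUREMENT_ID = "G-XXXXXXX"  # hypothetical GA4 measurement ID; use your own
PAGES = [
    "https://www.example.com/",
    "https://www.example.com/blog/",
    "https://www.example.com/contact/",
]

for url in PAGES:
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException as exc:
        print(f"{url}: request failed ({exc})")
        continue
    # gtag.js is loaded from googletagmanager.com; the measurement ID should
    # also appear in the config call
    has_tag = "googletagmanager.com/gtag/js" in html or MEASUREMENT_ID in html
    print(f"{url}: {'tag found' if has_tag else 'tag NOT found'}")
```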

Prioritizing technical SEO fixes

Make it a priority to continuously improve your technical and on-page SEO. Depending on your site, you may have a list of a dozen or a few hundred fixes. Try to determine which fixes will impact the most pages so that your efforts produce the greatest improvement.

It can be discouraging to see a list of 85 different technical SEO improvements. The upside is that, as you work through them, you will start seeing movement in your rankings. Over time, you’ll want very few errors, if any, showing up in your crawling tools.
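
One rough way to rank the list is to weight each issue type by severity and multiply by the number of pages it affects. The issues, page counts and weights below are purely illustrative, not real audit data.

```python
# Illustrative issue list; replace with the counts from your own audit
issues = [
    {"issue": "broken internal links", "severity": "error", "pages": 140},
    {"issue": "duplicate title tags", "severity": "warning", "pages": 85},
    {"issue": "missing alt text", "severity": "notice", "pages": 420},
]
weights = {"error": 3, "warning": 2, "notice": 1}

# Sort by pages affected x severity weight so the biggest wins come first
for item in sorted(issues, key=lambda i: i["pages"] * weights[i["severity"]], reverse=True):
    score = item["pages"] * weights[item["severity"]]
    print(f"{item['issue']}: {item['pages']} pages x weight {weights[item['severity']]} = {score}")
```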

If your content is relevant, targeted and well developed, and you’re receiving new, quality links every month, these technical optimizations will become the key differentiating factors for ranking better than your competitors.

The post What to look for in a technical SEO audit appeared first on Search Engine Land.



0 Comments

What are your secrets to overcoming marketing challenges? Take our survey

5/17/2022

0 Comments

 

Catching your prospect’s eye and moving them along the buyer’s journey has never been easy. Add the pandemic to the equation, and we know your job as a marketer has probably never been as challenging as it is today.

We invite you to take the marketing challenges survey so we can better understand what you’ve been challenged with the most and how you’ve overcome these obstacles. The survey results will help you see how your peers take on these challenges and prove ROI.

The first 100 people who fully complete the survey will be automatically entered into a drawing to win $250 to donate to a charity of their choice or a $250 Amazon gift card.

The post What are your secrets to overcoming marketing challenges? Take our survey appeared first on Search Engine Land.



0 Comments

20190304 SEL Daily Brief

5/17/2022

0 Comments

 

The post 20190304 SEL Daily Brief appeared first on Search Engine Land.



0 Comments

Highlights from #SMX Advanced 2018

5/17/2022

0 Comments

 

Another SMX Advanced is in the books! Last week’s show was a success, and we want to thank everyone who helped make it happen — speakers, sponsors, vendors, staff, organizers, and of course, our invaluable attendees. Right off the bat, we want to invite everyone — whether you were or were not at the conference — to check out our full archive of SMX Advanced 2018 speaker presentations on SlideShare.

Now, on to some of our favorite SMX Advanced moments.

Top 10 SMX Advanced 2018 highlights

1. The Meet & Greet

2. Keynotes

3. Learn with Google

4. Janes of Digital

#janesofdigital @bingads kicking off this evening at the Olympic Sculpture Park this evening during #SMX pic.twitter.com/XYwbiUVFXI

— SearchMarketingExpo (@smx) June 12, 2018

5. Matt Cutts https://ift.tt/sX8mkFh

6. Excel with Bing

7. The Search Engine Land Awards

8. The Expo Hall?

9. A beloved SMX staple, the Google AMA

10. Best In Show / #SMXInsights

Coverage from around the web

Next up, we wanted to share a roundup of some coverage from around the web, including recaps contributed by some of our own speakers. We’ll update this list as more coverage rolls out!

The post Highlights from #SMX Advanced 2018 appeared first on Search Engine Land.



0 Comments

SEL Daily Brief May 15 2019

5/17/2022

0 Comments

 

The post SEL Daily Brief – May 15 2019 appeared first on Search Engine Land.



0 Comments

Get your #SMXinsights here! Tastiest takeaways from SMX West 2018

5/17/2022

0 Comments

 

It seems like just yesterday we were kicking off SMX West, and now the show has come to a close! But dry those tears: the best moments of SMX live on in this incredible roundup of #SMXInsights shared over the past four days.

Whether you missed out on attending or want to revisit some of your favorite takeaways, this deck is yours to enjoy. Bookmark it for the future, share it with your social circles, and return to it again and again whenever you’re craving some fresh SEO and SEM inspiration… here’s a sample. Scroll down for the entire deck!

“Create content for every stage of the journey. 80% of the content is ‘I want to know’ [think FAQs]… via @BenuAggarwal #smx #smxinsights pic.twitter.com/exDe6xos0O

— Alex Ludwig (@imnotanattorney) March 14, 2018

“Voice search today has an error rate of only 6.3% – the same a human translator. Your customers are asking questions – answer them clearly for the sake of your organic strategy!”#SMXInsights #seo@EntrataSoftware pic.twitter.com/lAtQvbMIge

— Diogo Ordacowski (@ordacowski) March 13, 2018

Google is here to make sure we’re paying attention #SMXwest the sugar rush of #SMXInsights is about to hit pic.twitter.com/uTW9eZFW3L

— Holly Miller (@millertime_baby) March 13, 2018

“Technology is only as good as the humans who train it” #SMXInsights pic.twitter.com/gdHzybfTSD

— Sipra Roy (@SipraRoy) March 13, 2018

Like what you see? Check out our entire compilation of #SMXInsights here! (You can also check it out on Slideshare!)

Psst… Hungry for some more insights? Come to our next SMX event, SMX Advanced, June 11-13 in Seattle, WA! Join the best and brightest search marketers for an unbeatable learning and networking experience!

The post Get your #SMXinsights here! Tastiest takeaways from SMX West 2018 appeared first on Search Engine Land.



0 Comments

