February 3, 2016 · 7 min read

Content Quality Management starts from requirements

Kirill Soloviev
Startup founder and CEO

When thinking about content quality management and performance of global content, many experts tend to focus exclusively on evaluation, measurement, or assessment of quality. However, there is so much more to Content Quality Management and Content Quality Assurance than the act of checking alone!

Many of the processes essential for delivering high-impact, high-quality content in multiple languages actually have to happen before AND after any quality checks. If these processes are not in place, your organization might be losing time-to-market and wasting budget on sub-optimal content and inefficient, costly QA practices. Here are 6 steps to avoid that, inspired by the Six Sigma DMAIC approach:

  1. Define and share requirements
  2. (Produce content — author and/or localize)
  3. Measure quality
  4. Analyze results
  5. Improve both content AND requirements
  6. Control and repeat

Today, we’ll be focusing on the first item: establishing requirements for your content before you produce it so that you can tell “good” from “bad”, and sharing them across the entire global content supply chain. We’ll cover the rest in future blog posts, so stay tuned.

1. Define and share requirements


Why spend time on defining what “good quality” means for your content?

Most heated arguments about quality usually happen when people have very different pictures in their minds of what “good” quality is. To avoid this blunder ourselves, let’s first consider a few definitions of quality:

The key insight to take away from those definitions is that quality is always relative. We can only have a meaningful conversation about the quality of a work product in question (a piece of content, for instance) by comparing it with something else: pre-existing norms, requirements, standards, rules, examples, metrics, or even past experiences.

So agreeing on content requirements is a prerequisite for achieving quality at all. The problem, however, is that these requirements are so often communicated implicitly that we don’t even pause to think about them. Everyone surely knows what type of content I feel will work best for our audience, readers, and users. Right?

Wrong. As soon as your content production team grows beyond one person, you’re in for a big surprise. And that happens much sooner than you may realize: just imagine a VP or another corporate stakeholder making edits to content that contradict your “common sense”, or a freelance writer you hire to produce a blog post delivering something totally unusable and off-brand.

By the time you are localizing, even if it’s just into 2–3 languages, the number of people on your extended content team who need to know what “good content” means to your organization will have grown by a factor of 10. If you’re the one responsible for content quality management and for driving global content performance in your organization, it’s YOUR job to keep all of these people on the same page, every time, regardless of their location or company. Otherwise, consistency will always remain an elusive goal.

How to explicitly define requirements in content quality management


We’ve already discussed that each piece of content is created for a purpose. Communicating this purpose to the entire team is a good start for achieving quality. However, that alone is usually not enough. What else do we need?

Over the decades, the content industry has crystallized two very powerful ways to define, store, and share content requirements: Style Guides (or manuals of style) and Terminology Databases (or term bases). They are similar in the sense that both are a collection of different rules, instructions, and examples of how to create content (in one language or in several) that will be considered “good” by an organization in a given context (e.g. a specific content type, or a particular project).
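To make this a bit more tangible, here is a minimal sketch (in Python, with entirely hypothetical field names, not any particular term base format) of how a single term base record could be represented: one concept, the approved term per language, the variants to avoid, and a usage note for writers and translators.

```python
from dataclasses import dataclass, field

@dataclass
class TermEntry:
    """One hypothetical term base record."""
    concept: str                      # what the term refers to
    preferred: dict[str, str]         # language code -> approved term
    forbidden: dict[str, list[str]] = field(default_factory=dict)  # language code -> banned variants
    note: str = ""                    # usage guidance for writers and translators

entry = TermEntry(
    concept="user authentication action",
    preferred={"en": "sign in", "de": "anmelden"},
    forbidden={"en": ["log in", "login (as a verb)"]},
    note="Use 'sign in' as the verb; 'login' is acceptable only as a noun.",
)
```

A style guide entry works much the same way, except the “rule” is usually prose plus examples rather than a single approved term.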

In the world of Globalization, Internationalization, Localization, and Translation (GILT), Translation Memories (or TMs) are often used for capturing “good” content for future reference and reuse, and thus can be considered a specialized form of content requirements. Another form of requirements in localization is instructions inside Localization Kits (or LocKits), which usually focus on technical specs essential to delivering “good” localized content in a software app or technical documentation to its end users.

Instead of going into the details of how those sources of requirements work, let’s consider what they typically consist of. Here’s one way to categorize content requirements:

1. Formal wording rules

This type of rule prescribes a specific choice of words or sentences in narrowly defined contexts: “When talking about this, always phrase it like this”. For example:

Formal wording rules typically need human expertise to be validated when doing quality checks. However, sometimes automated methods may be called upon to assist or improve efficiency.
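As an illustration of such automated assistance, here is a minimal sketch of a script that flags wording-rule violations for a human to review. The rules themselves (a hand-written map of discouraged phrasings to preferred ones) are invented for this example; a real checker would load them from your style guide or term base.

```python
import re

# Illustrative wording rules: pattern to flag -> suggested replacement.
WORDING_RULES = {
    r"\blog in\b": "sign in",
    r"\be-mail\b": "email",
}

def check_wording(text: str) -> list[str]:
    """Return a human-readable finding for every wording-rule violation."""
    findings = []
    for pattern, preferred in WORDING_RULES.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            findings.append(
                f"'{match.group(0)}' at position {match.start()}: prefer '{preferred}'"
            )
    return findings

print(check_wording("Please log in to read your e-mail."))
# ["'log in' at position 7: prefer 'sign in'", "'e-mail' at position 27: prefer 'email'"]
```

Note that the script only flags candidates; deciding whether the rule really applies in a given context is still the reviewer’s call.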

2. Formal technical rules

This type of rule mandates how to present and format your words and sentences. For example:

Formal technical rules can often be validated automatically as part of quality evaluation.
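For instance, checks like the following (sketched here with invented rules and thresholds) can run unattended over every localized segment, since placeholder integrity, length budgets, and stray whitespace are purely mechanical to verify:

```python
import re

def check_technical_rules(source: str, target: str, max_len: int = 120) -> list[str]:
    """A few illustrative technical checks on a localized segment."""
    findings = []
    # Placeholders like {name} must survive translation unchanged.
    if sorted(re.findall(r"\{\w+\}", source)) != sorted(re.findall(r"\{\w+\}", target)):
        findings.append("placeholder mismatch between source and target")
    # UI strings must stay within the length budget.
    if len(target) > max_len:
        findings.append(f"target is {len(target)} chars, limit is {max_len}")
    # No double spaces or trailing whitespace.
    if "  " in target or target != target.rstrip():
        findings.append("stray whitespace in target")
    return findings

print(check_technical_rules("Hello, {user}!", "Hallo, {username}! "))
# ['placeholder mismatch between source and target', 'stray whitespace in target']
```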

3. Informal rules

Informal rules almost always require human know-how to be validated during quality measurement. Due to inherent subjectivity, they also often need a person with authority to confirm the final judgment if arguments arise in the quality inspection process.

Too many vs. too few: some requirements are still implicit, and that’s OK


When you’re just starting to document your content requirements, finding the right balance between capturing too many and capturing too few is key. After all, if you go all the way down to listing basic spelling and grammar rules for each of your languages, your list of requirements will be as long as a good book (or, rather, a stack of them).

And, even with software assisting you during the quality evaluation stage, it’s a huge effort to continuously maintain a large requirements database, train new team members (across organization boundaries) on each of those requirements, ensure that requirements are well adapted to each of the languages, and work through inevitable false positives that arise with any automatic validation process.

So how do we keep our content requirements lean and avoid non-value-added activities when evaluating content quality against those requirements?

  1. Design your requirements for specific personas. It’s not unreasonable to expect a certain level of expertise from anyone in your content supply chain, be it copywriters, technical writers, editors, subject matter experts, translators, reviewers, revisers, or proofreaders. This expertise may include a certain level of language proficiency, general industry domain knowledge, and professional experience (including knowledge of industry standards).
  2. Refer to pre-existing requirement sources (e.g. public style guides). Software engineers using Object-Oriented Programming (OOP) techniques lean heavily on inheritance when writing their code to avoid duplicated effort and ensure the same behavior in different contexts. You can do the same with your content requirements by referring to “master documents” for everything you don’t want to specify explicitly (see the sketch after this list). Typical references include the Chicago Manual of Style, the European Commission Directorate-General for Translation’s English Style Guide, and the Microsoft Style Guides (there’s a version for English content called the Microsoft Manual of Style, as well as multiple versions for localized content in many different languages).
  3. Think of the worst outcome that may happen if you omit a requirement. Not all requirements carry the same weight. That is, your audience (readers and users) might be more sensitive to certain aspects of quality than to others. So ask yourself: why do we really put this requirement in place, and what do we stand to gain or lose? What might happen to our content performance if our content producers interpret this aspect differently in every piece of content? Do we have specific data that proves that, or is it just a hunch (or perhaps the loudest voice in the room)?
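Here is the sketch promised in point 2: a minimal illustration (all names, fields, and values are hypothetical) of a project-level requirements set that records only its deviations from a referenced master style guide and inherits everything else.

```python
# Hypothetical "master" requirements, standing in for a public style guide.
BASE_STYLE_GUIDE = {
    "reference": "Chicago Manual of Style",
    "oxford_comma": True,
    "date_format": "Month D, YYYY",
    "max_sentence_words": 30,
}

# Project-level requirements: record only the deviations, inherit the rest.
PROJECT_OVERRIDES = {
    "date_format": "YYYY-MM-DD",   # technical docs need sortable dates
    "max_sentence_words": 22,      # tighter limit for UI-adjacent content
}

def effective_requirements(base: dict, overrides: dict) -> dict:
    """Merge project overrides onto the base, mimicking inheritance for content rules."""
    return {**base, **overrides}

print(effective_requirements(BASE_STYLE_GUIDE, PROJECT_OVERRIDES))
```

The point is the shape of the data rather than the specific rules: the project-level file stays short because the heavy lifting is delegated to the referenced master document.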

How do you currently define & manage content requirements in your organization, and how does your approach change across languages, content types, and departments? What key challenges do you face when trying to communicate your requirements for producing high-quality content to your team, peers, vendors, and stakeholders? How do you know which requirements are essential, and which should rather NOT be there? Share your experience in the comments section!
