InText Translation Forum 2011. Presentation by Valentyna Kozlova

Uploaded by InTexts on 02.06.2011

“Quality control at InText Translation Company” Valentina Kozlova May 21, 2011
Some of you have already seen the results of my new role at the company:
some have received the proofreader's corrections and comments,
and some have heard positive feedback from me.
It is true that the quality of our deliverables is essential for the company.
To me personally, it is always a shame to send a translation of mediocre quality to the client,
such a shame! I don't want to do that!
I really want to improve quality; surely that is better for everyone.
It is very hard to reach an objective conclusion about what a high-quality translation is,
whether it is an accurate word-for-word translation or a creative translation with good style.
Everybody has a different vision,
and it is not my task here to come up with an ideal translation formula.
In my presentation I will try to outline our company's approach to evaluation
and to the process of assigning translators to a project;
I will talk about quality assurance and the history of its development,
as well as about our company's plans on the issue.
We use the term Quality Assurance.
The same procedures can also be called Quality Control or Quality Management.
QA is an established notion for us.
Everyone agrees that high quality is a must in every market;
our clients expect quality when starting a collaboration without even mentioning it.
Undoubtedly, the high quality of our product makes us more competitive
and brings more opportunities, more work, and higher profits.
The question is how do we reach this “heaven on earth”?
From our perspective, the company's main task is
to place the job thoughtfully,
to hit the target when choosing both the translator and the proofreader,
so at this stage it is important
to identify the best translator for the given language and subject field.
How do we do that?
According to what criteria?
How do we make it as objective as possible?
I will at least try to answer these questions in this presentation.
How do we measure quality?
If you have been given proofreading tasks or received our feedback,
you probably noticed that each translation
is checked against 8 criteria.
I will be more specific about those later on.
To make the evaluation more objective,
we take an average of those 8 marks.
Proofreaders who have the authority give us feedback on translations,
making the evaluation more and more unbiased with each project.
We are cautious about giving translators the authority
to evaluate others, and we are now trying to build a special pool of our best resources for that.
Different criteria have different weights,
so the mark for layout mistakes counts for less
than the mark for translation accuracy.
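The weighted averaging described above can be sketched as follows. Note that the criterion names and weights in this sketch are hypothetical placeholders, chosen only to illustrate that a layout mark carries less weight than an accuracy mark; they are not the company's actual figures.

```python
# Hypothetical criterion names and weights, for illustration only.
WEIGHTS = {
    "accuracy": 0.25,
    "terminology": 0.15,
    "grammar": 0.15,
    "style": 0.10,
    "spelling": 0.10,
    "consistency": 0.10,
    "instructions": 0.10,
    "layout": 0.05,  # layout mistakes weigh less than accuracy
}

def overall_rating(marks: dict) -> float:
    """Weighted average of the per-criterion marks (scale 1-5)."""
    total = sum(WEIGHTS[c] * marks[c] for c in WEIGHTS)
    return round(total / sum(WEIGHTS.values()), 2)

marks = {c: 4 for c in WEIGHTS}
marks["accuracy"] = 5
print(overall_rating(marks))  # the high accuracy mark pulls the average up: 4.25
```

A uniform set of marks gives exactly that mark back; raising only the heavily weighted accuracy criterion moves the average more than raising layout would.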
What influences quality?
In my opinion, the factors include, of course,
the translator's and proofreader's knowledge of the given subject,
language skills (both source and target),
and turnaround time; of course,
it is very hard to produce a good translation under the pressure of meeting a deadline.
Good CAT tool skills are important:
typing speed, experience with different tools,
and knowing their peculiarities, their strengths and weaknesses.
It is also important to search the Internet quickly:
to know exactly what to look for and where to find it.
The quality of the TM that we receive from the client
and pass on to you means a lot as well.
You should know that we are being checked all the time.
Many of our clients demand a completed checklist
that we deliver with the finished documents each time.
They are similar to ours;
sometimes project managers fill them in,
sometimes our in-house proofreaders or I do.
Some clients ask their own proofreaders, native speakers of Russian, to check our translations,
and all our clients who work with CAT tools
use automatic check tools:
Xbench, the built-in QA modules in TagEditor, and other tools.
It is a standard.
Here's an example of a checklist for one of the projects.
Every time, my colleagues Tatyana or Irina print it,
read it, put checkmarks, sign it,
scan it, and send it over to the client.
Very time-consuming…
You might have seen that we have a kind of checklist for you as well.
You can find it when you are working with our web interface for translators.
Can anyone remember what it says?
It's not so clear on the slide, so you can't cheat…
You see these instructions every time you confirm a job,
and you have them in front of you every time you deliver.
It's already like desktop wallpaper:
you are so used to it that no one knows what it says.
But we have a plan.
Soon we will update the web interface for translators,
so that things will be easier to find and easier to use,
and the checklist will contain specific instructions for each project.
The project manager will prepare it for the translator and the proofreader,
and it will be convenient to use, follow, and fill in.
For now it will be one of the most important tools
for keeping the quality of deliverables under control.
A short guide on how to run an automatic translation check before you deliver.
Every manager in our company is obliged to check the translated file
(F7, F8, Xbench) before delivery to the client.
In TagEditor, it is important that the tags are verified,
and then the file has to be checked with Xbench or QA Distiller.
Remember that the target language should be applied to the entire text in Word
for the spell check to work properly later on.
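As an illustration of what one such automatic check does, here is a minimal sketch of a tag-consistency verification of the kind TagEditor and Xbench perform: every inline tag in the source segment must also appear in the target. The angle-bracket tag pattern and the sample segments are simplified assumptions, not the actual file format these tools read.

```python
import re

# Simplified illustration of an automatic tag check: the <...> pattern and
# the segment pairs are assumptions, not a real TagEditor/Xbench format.
TAG = re.compile(r"<[^>]+>")

def tag_mismatches(source: str, target: str) -> list:
    """Return tags whose occurrence counts differ between source and target."""
    src, tgt = TAG.findall(source), TAG.findall(target)
    return sorted({t for t in src + tgt if src.count(t) != tgt.count(t)})

print(tag_mismatches("Press <b>Start</b>.", "Нажмите <b>Пуск</b>."))  # [] - tags match
print(tag_mismatches("Press <b>Start</b>.", "Нажмите Пуск</b>."))     # ['<b>'] - opening tag lost
```

A real QA tool runs many checks like this at once (tags, numbers, untranslated segments, terminology), but each one follows this same compare-and-report pattern.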
If anyone has any questions (and perhaps some of you do),
we are thinking of compiling this information into a webinar,
so that we can show everything we know, share it with you,
make things clear and easy, and keep the information for future use.
Now a little bit of history.
Starting in 2003, we had the following evaluation system:
marks from 1 to 5, with 5 being the highest,
given after test tasks (pass/fail) and after completed jobs.
A very generic mark, with no explanations,
nothing to reflect the strengths and weaknesses of the translator's work.
Eventually, the evaluation form improved, and until mid-2010 we used this form.
The criteria were almost the same;
they have not changed to this day.
It used to be a Word file
that the proofreader filled in, indicating the mark, comments, and the job number.
Now things have improved even more:
we are using an Excel file, as you know.
We have made space for proofreaders' comments, to get the negative energy out of the system
and to record examples of the most outrageous errors.
Thus the project manager sees whether the job was done particularly badly
and can decide to send feedback to the translator.
Have any of you had the opportunity to use such a form?
Is it suitable?
OK, alright, I'll wait.
Yes, it takes time,
I know it myself;
sometimes I fill in such forms when preparing feedback for translators,
and this desire to make people see is so strong.
And here we have it, for sharing.
It is especially useful at the preliminary control stage,
when you have only a part of the job to check
and you prepare comments for the translator to take note of.
I would be grateful if you could share your ideas and thoughts;
of course there are many things to improve.
In the future, we plan to develop a system operating with figures only.
It would take into account the volume of translated text,
with each type of error, and each repetition of an error, having its own weight;
then the figures would be summed up.
Well, an algorithm is required;
we would really need a mathematician here,
and linguists too, to set the weight of each type of error.
It is really a long process;
we are developing this system now and we do plan to implement it,
so that we can evaluate correctly, without further argument,
in a way that accurately captures all the little details.
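In miniature, the planned "figures only" system could look something like this. The error categories, their weights, and the normalization by text volume are all hypothetical placeholders for whatever the mathematician and the linguists eventually settle on:

```python
# Sketch of a figures-only evaluation: each error type has its own weight,
# repeated errors count each time, and the penalty is normalized by the
# volume of translated text. All weights and the formula are hypothetical.
ERROR_WEIGHTS = {"accuracy": 3.0, "terminology": 2.0, "style": 1.0, "typo": 0.5}

def quality_score(errors: dict, words: int) -> float:
    """5-point score: 5 minus weighted errors per 100 words, floored at 0."""
    penalty = sum(ERROR_WEIGHTS[kind] * count for kind, count in errors.items())
    return round(max(0.0, 5.0 - penalty * 100 / words), 2)

# A 1000-word job with two accuracy errors and five typos
print(quality_score({"accuracy": 2, "typo": 5}, words=1000))  # 4.15
```

The appeal of such a scheme is exactly what the talk describes: the same error list always yields the same number, so two evaluators can no longer arrive at different marks for the same job.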
Well, I think you, the proofreaders, want to ask:
OK, I have made the time to fill this in,
copied all those silly mistakes,
highlighted what was needed,
but what happens next?
Where does it go?
What if it just lies there in the Received folder
and nobody ever takes a look at it…
Not true!
The completed form goes from the proofreader to the project manager,
and the project manager enters all the marks into our TMS (translation management system).
The system has all the jobs registered, handled, closed, and evaluated.
It helps to manage jobs on different levels.
At the project management level, there is a built-in data sheet for translator evaluation.
The project manager, or most recently I,
enters all the comments and marks from the proofreader into the sheet.
While doing that, we try to avoid harsh words: we delete offensive wording,
make the comments neutral,
and sometimes correct small misprints and discrepancies so that nobody gets offended or thinks badly of us or of the proofreader,
and we delete all unnecessary comments, leaving only the gist of each one.
Here you see we have 8 marks, so we get an average value,
which you can see in the Overall Rating cell right there.
This is a project I was managing; if I'm not mistaken, it was about half a page,
a cute text about toys for dogs, very simple, and I only had to make a few corrections;
you can see the rating over there.
It will be added to the translator's overall rating.
Now, what happens next...
We prepare and send feedback to the translator,
hoping that he or she sees the errors and doesn't make the same mistakes in the future.
A little naïve, but we're doing our best.
There is a special column where the translator is asked to add his or her comments.
Then we eagerly wait for the translator's reply, to see the reaction and the comments.
I keep a written record of all the feedback I have sent, so that I remember exactly what went out.
It is very interesting to analyze the reactions;
we take notes if perhaps the proofreader was wrong in the corrections.
There are cases when a translator sends back very well-reasoned disagreement, and I think:
now I should talk it over with the proofreader, not the translator.
Sometimes we see corrections you can't really argue with,
wrong punctuation or obvious spelling mistakes…
but if the translator still tries to disagree, we make our notes,
analyzing how diligent, flexible, and teachable the translator is.
We see whether he or she keeps making the same mistakes,
whether he or she is actually a sane person; yes, sometimes it happens…
We are very cautious with translators
who always disagree with our corrections or ignore obvious mistakes,
saying that the mistake was not theirs, it was in the TM…
So in general, we look at these feedback forms very carefully, all the time.
Now, as I showed you on the previous slide,
we put all the marks into our TMS,
and all of that goes into the supplier's profile, as I mentioned.
Up to this point, this is how you looked to us:
a grayish background,
your personal data, registration date, comments, marks;
that's it, this is your mirror.
As in a browser, you see tabs; the first one is Details:
address, telephone, your name, things like that.
Moving on to the second tab, Additional Info, an important one:
it has your registration date, the date of the last update, your web forms login information if you use the portal,
a table containing all the marks and comments for each of the criteria and the general comments for jobs done in the past,
and here is the overall rating mark.
Right here you can see every comment ever entered;
Lyudmila Ivanovna, here's yours.
So when a project manager or I browse the supplier base to select the perfect one,
we stop at every comment, trying to predict what would happen if we picked this or that translator.
There are also some additional comments that were entered separately,
so we have all of your comments and marks right here; we do read them, we do pay attention, we do care.
It is very important that we keep up this evaluation process.
Now, here's the profile of a negligent translator;
none are present here, as we try to avoid such freelancers.
As you see, if the average mark drops below 4, the marks turn red as a signal to the project manager: danger!
By the way, Vitaliy, here is your comment below:
the translator should read the text more carefully.
Sometimes with new translators we see that it is already the 10th job, and the marks are still not high.
If the supplier is not teachable and our feedback brings no result,
we can sometimes apply a kind of penalty, to see if it works.
To decide to stop working with a supplier, we hold a round table and discuss the situation with our colleagues,
top managers, and proofreaders. It is always a hard decision,
because of course every person is unique and vulnerable,
and we always try to bring out the best in people: perhaps try a different subject field, something simpler.
But if that doesn't help either, we move the profile into the "Not in Use" part of the database,
and the translator can no longer be assigned to projects.
We do have some,
yes, 3,000 actually, in our database,
but let's not dwell on the sad statistics.
On this slide you can see the current weighting of each error type.
Each criterion is weighted differently, in percentages.
Maybe you have your own opinion on this;
perhaps you can suggest changing the weighting in some way. Any comment is valuable,
so please bring it on, and we can discuss it.
When we talk about additions and omissions, what matters is not style
but something essential,
like a missed figure here or there,
or a word that alters the meaning; something critical.
OK, moving on; it's going to thrill you, I promise, let's see…
Now, the next tab is…
it… oh no, it's OK. Sorry for panicking so quickly.
Languages and Subject Areas, a very important tab:
here we can find all the subjects the supplier has ever worked with.
Some of them may be confirmed, and some may be rejected for this particular translator.
Where do they come from?
First of all, every translator begins with test translations in a particular subject;
those subject areas appear here.
If the test was passed, we put "Confirmed".
If the translator declared a particular subject but did not do the test,
or we have no tests on this subject at our disposal, we mark it in this table as "Declared".
Further on, if jobs on this subject bring bad feedback and the translations were weak,
we reconsider the status and put "Rejected".
It also happens that a translator does not declare, let's say, ecology,
but a project manager calls him: Alex, we have an ecology job here, could you help us out?
Fine, I'll take it. In this case the TMS gives it "Auto" status.
The next column, as you can see, is Self-rating.
When registering, translators give marks to show how well they think they have mastered a subject.
The mark is there for project managers to take into account as well.
The fourth column shows the translator's average accuracy rating for jobs completed in a particular subject area.
So in this case the translator has done medical projects with an average accuracy of 4
and translations on general subjects with an accuracy of 3.
These marks are also highlighted in bright colors: texts on tourism, 2,
so this subject area will be rejected for the translator automatically.
The fifth column shows the number of jobs completed in a particular subject field: here, fifteen,
thirteen, and sixteen.
Thus we can see the translator's actual experience in a particular subject field.
The last column shows the number of jobs that were evaluated,
so we can tell whether the mark is trustworthy: if 25 jobs were evaluated,
it is; but if there is just one, oh well, maybe next time the translator will do better.
All this gives us a slim but real chance to see things objectively
when choosing the supplier for another project.
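The way this subject-area table might be read when picking a translator can be sketched as follows. The broad rules follow the talk (an average below 4 is shown as a warning, a very low mark rejects the subject, and a mark backed by few evaluated jobs is not yet trustworthy), but the exact thresholds and the function shape are simplifying assumptions:

```python
# Sketch of interpreting a subject-area row. Thresholds are assumptions:
# the talk mentions red marks below 4, automatic rejection for very low
# averages, and 25 evaluations as clearly trustworthy.
def subject_status(avg_accuracy: float, evaluated_jobs: int) -> str:
    if avg_accuracy <= 2:
        return "Rejected"   # e.g. tourism with an average of 2
    if avg_accuracy < 4:
        return "Warning"    # shown in red to the project manager
    if evaluated_jobs < 5:
        return "OK (few evaluations, mark not yet trustworthy)"
    return "OK"

print(subject_status(4.0, 25))  # e.g. Medical: a solid 4 over many jobs
print(subject_status(2.0, 16))  # e.g. Tourism: rejected automatically
```

The point of the last branch is the one made in the talk: the same average mark means much more when it is backed by dozens of evaluated jobs than by a single one.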
Below, you can see, we indicate the native language.
For some it is Ukrainian, for some Russian; there are also foreign languages when we work with freelancers from abroad.
A little lower, we list the language pairs the supplier works with,
each with the status "Declared", "Confirmed", or "Rejected", depending on whether the translator has done any tests or jobs in the language pair.
As you see, this information is permanently available; we never delete anything from our database,
so that we can see the history of collaboration and are not tempted to make the same mistake twice.
So here the system treats language pairs on a par with the evaluation of subject-area skills.
If we keep exploring the supplier's profile, the next tab is the price list…
nothing interesting there: the price and method of payment agreed with the translator.
The next tab contains the translator's CV.
We review it carefully so as not to miss anything relevant, anything directly or indirectly showing the translator's experience.
We try to highlight all the particularly useful passages that show what the translator is good or bad at,
making the profile as useful as possible.
The next tab, a very interesting one, is called Tools.
It shows the software the translator works with.
The software is either declared by the translator when we start working together,
or we provide the software ourselves and add it to the list of tools.
Each tool is also assigned a "Declared" or "Confirmed" status.
For rare tools, we sometimes skip the status altogether.
You can find all this information in your web forms,
so you can change, add, remove, and update your tool skills yourself.
Now, the last tab, entitled Tasks and Availability.
A very interesting one: this is where we can see all the jobs the translator is doing now or has done in the past.
This history is available to all project managers.
There are columns for Start Date and Deadline, then Return Date,
which is the date when a project manager manually closed the task for the translator.
Next we indicate the kind of task, whether it is translation or proofreading,
and then columns for the client name, project name, job number,
language pair, and the automatically estimated volume of the job in pages.
About the colors: green indicates current jobs;
blue, planned jobs; grey (or white, as you see them now, I guess), completed jobs;
and red indicates that the job is late.
That does not necessarily mean the translator is late with delivery;
it means that the project manager has not changed the job status to Completed.
A project manager can sort jobs by any of the parameters:
task (translation or proofreading), client name, project name, and language pair.
This helps to see the translator's experience accurately and in detail.
Below, there is the availability table.
For instance, I call the translator: Alex, hi, could you translate around 5 pages by tomorrow?
And he says: Valentina, sorry, I won't be available till Saturday.
How do we let the rest of the team know that Alex is busy,
especially now that we have several office buildings?
So here in this table I indicate that from this date until that date
the translator is away, and then the reason, like vacation… (there's no "hangover" option here).
And it shows who put the comment in the database and when.
Of course it is far from ideal, because you still have to make a call to find out…
OK, so how do we look through the data to find the right supplier?
Say we have a job from French into Russian,
in this subject, using this tool. How do we search?
Here is the search tool we have in our TMS.
This is where the HR manager or a project manager can search for suppliers.
We set the language pair, the kind of task, the subject area, the tool,
and the mother tongue, though that is not crucial, since most of us speak both Russian and Ukrainian.
Then there are some interesting options below, like sorting by rating.
What we see next is a list of translators that match the indicated parameters.
We look through the list from the top down and check each translator's current situation:
whether he is busy with another job, or perhaps we can still talk him into squeezing in a page or two.
And so we move from the awesome suppliers to the not-so-cool ones.
Of course the translator's rating greatly influences our choice,
so if in the past you gave good marks to a translator you were proofreading,
we will offer him or her more jobs;
if not, the chances get smaller and smaller.
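The search just described, filter by language pair, subject, and tool, then work down the rating-sorted list, can be sketched like this. The supplier records, field names, and language-pair codes are invented examples, not the TMS's actual data model:

```python
# Sketch of the supplier search: filter by language pair, subject, and
# tool, then sort by rating so the project manager starts from the best
# candidates. Records and field names are invented for illustration.
suppliers = [
    {"name": "A", "pairs": {"fr-ru"}, "subjects": {"ecology"}, "tools": {"TagEditor"}, "rating": 4.6},
    {"name": "B", "pairs": {"fr-ru"}, "subjects": {"medical"}, "tools": {"TagEditor"}, "rating": 4.9},
    {"name": "C", "pairs": {"en-ru"}, "subjects": {"ecology"}, "tools": {"TagEditor"}, "rating": 4.8},
]

def find_suppliers(pair: str, subject: str, tool: str) -> list:
    """Matching suppliers, best-rated first."""
    hits = [s for s in suppliers
            if pair in s["pairs"] and subject in s["subjects"] and tool in s["tools"]]
    return sorted(hits, key=lambda s: s["rating"], reverse=True)

print([s["name"] for s in find_suppliers("fr-ru", "ecology", "TagEditor")])  # ['A']
```

Sorting by rating is what ties the whole evaluation pipeline together: the marks proofreaders enter end up directly determining who appears at the top of this list.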
The next issue is how to choose a proofreader.
It is a very hard topic to discuss; I can hardly keep myself from crying.
One thing we all agree on is that the proofreader has to be more experienced than the translator in the given area,
so that, like a guru, he can guide the translator and coordinate the translation process.
We do our best to assign only the top of the top,
exactly the translators in those top positions, to proofreading tasks.
So if a project manager offers you a text for proofreading, it is not to make you a scapegoat,
but because we trust you the most.
Of course we understand that being a good translator doesn't mean being a good proofreader.
We are all unique, and some do not quite understand the principles of a proofreader's job,
but we try to single out those of our freelancers who are apt for this kind of task.
Now let's talk a little about penalty fines.
We actually apply them quite rarely.
As far as I know, fines in our company are applied in 30% of the cases where they would be applicable,
that is, when the translation is an actual mess.
In most cases, we give people a chance to improve.
Fines are applied case by case; we take a lot of things into consideration, and a fine is usually 10 to 20% of the payment for the job.
It is more a psychological tool than a financial lever.
The decision to apply a fine is always a team decision: Kirill, Stanislav, the proofreader, and I sit down
and think carefully about whether we really need to apply the fine, but when it's needed, we do it.
We do not apply fines to new translators, because we are aware that it takes time to learn our requirements,
so we teach, we call, we talk, we give feedback,
but we try not to frighten anyone; just as you said, what if the translator simply hasn't got it yet?
Well, in general, I think we are not that scary, really.
Now, about the good things: rewards.
We are gradually making rewards a standard and usual practice.
You will agree with me that it is not entirely fair when you put in more effort than expected
to fix an absolutely low-quality translation,
spending five times more than you usually would on a normal project.
In these cases we will gladly give the money we fine the translator to the proofreader of the job.
We study the files carefully, check the mistakes the translator made,
and note that the proofreader must have spent much more time on this.
The proofreader provides us with examples of those awful mistakes; in addition, we compare the files ourselves
and check the corrections and the errors, and based on that we evaluate the quality
and then decide to fine the translator and transfer that money to the proofreader.
But actually, it is a rare thing, because I don't really recall any truly awful translations lately;
of course, if it happens, we try to be fair and compensate the proofreader for the work and the stress.
Also, if we notice that a translator is consistently delivering good quality,
is at the top, works with big volumes, and does his best, we raise the translator's rates to show our appreciation and as an incentive.
Conversely, if a translator has unreasonably high prices and a low rating,
we usually try to lower the rates:
we send feedback, discuss the errors, and try to come to an agreement on a lower price.
Yes, I agree, Svetlana…
No money can calm the nerves…
OK, that was just a short digression to talk about pricing policy.
What difficulties do we have?
I agree with you that the time for proofreading is always limited and often short.
It really is hard to do the task well while sticking to all the instructions,
and it is hard to fill in the form we give you, with the examples, comments, and marks; it does require time.
So you think, I will complete the form later, when I have the time, but later it is the same:
extra time, and eventually you forget about the project and the form.
So, not good.
Then, we work with the TMS and the form manually, entering the marks and the comments,
so mistakes can happen here: a comment in the wrong line, the wrong mark,
the human factor, can't really help it. That is among the difficulties as well.
And then, there's the lack of personal communication.
You have this table, all filled in, so it is not convenient to send ICQ or Skype messages on top of it.
As QA manager, apart from sending the form as feedback,
it would be too much to write an additional letter with further comments, so there is less personal communication, it's true.
Yet our new version of the web forms is to be implemented really soon, right, Alexander?
That will smooth out these rough spots.
We have a one-year plan.
It includes creating new web forms: as they say, faster, higher, stronger.
The evaluation table for proofreaders will be integrated,
so when you deliver the job, you will simply need to open a window
and enter your marks and comments there; no additional files, right?
Yes, so it will be faster and more convenient, I really hope so.
Again, as I said, there will be a separate checklist for every project: fewer words, more exact, more…
how do you say it? Pinpoint.
And most important, there will be a kind of mini-forum, a kind of chat,
to exchange information between translator, proofreader, project manager, and QA manager.
You know, to make things better, all this information will be stored, so in case of any queries we can always look up the reason for an error or how a disagreement arose.
So, let's hope we launch this soon.
I would love to get feedback from you on this.
Perhaps you have a different vision, see something we don't quite see; maybe something else needs to be fixed and tuned.
I hope you have something to tell us: 24/7, call me in the middle of the night, I will be happy to hear your ideas.
You have my mobile number in the leaflet.
So I'm waiting for your genius ideas, anything that improves quality,
because I'm still young; I graduated from university not so long ago.
I really feel like doing something useful, but you know, one needs experience, not just enthusiasm.
So the experience you have will help us make things fast and convenient for you, and also useful for our company.
So thank you very much for your attention. I will be glad to discuss it with you anytime.
InText Translation Company