
Great News for Consumers: the announcement concerning Sarine and GCAL

Our guarantee states that we will pay the difference (in retail dollars) of the grade assigned vs the actual grade, so our liability is not limited to the cost of the certificate.

I very much appreciate all you bring to the table, and wish GCAL all the success possible.

But this point sticks.
Who determines "the actual grade"?

The issue I have is that there's an implication that other labs don't guarantee the grade due to inadequacies, as opposed to the fact a diamond's grade can't be effectively guaranteed. This is due to the nature of diamond grading.
 

Yeah, the guarantee posted upstream seems almost silly. They guarantee that if you send the diamond back to them and they give it a different grade, they'll pay you a bunch of money. That would not reassure me one little bit. "We said this was a magical princess. If you think you've discovered that it's actually a frog, you can send it back to us. Our graders will look at it again and tell you again that it's a magical princess. If for some reason they change their minds and say it's a frog, we will pay you lots of money." Uh huh.

As Dave points out, who says whether something is a magical princess or a frog? What is the official, bright-line, everybody-agrees, nobody-can-dispute definition of a princess? If it's just "This is our opinion," then what's the point of their guarantee? They're guaranteeing that their opinion is their opinion. Good luck getting them to change their mind.
 
Companies are judged by both what they do and how they present themselves. And for every consumer who posts - tens, hundreds, perhaps thousands are lurking, reading, forming their own opinions.

And some of us also happen to have day jobs in technology and are intimately familiar with AI/ML model development and training.

More substance and less marketing, please. I would appreciate a clear answer to the question Rockdiamond, glitterata, and others have posed multiple times regarding certification guarantee liability.
 

Not to mention retailers, wholesalers, diamond cut manufacturers and lab folk.
 
The guarantee has been in place since 2007. This is not a new guarantee. Yes, the diamond needs to be resubmitted to GCAL for the guarantee review process to begin. We have made mistakes in the past and have paid on those guarantee challenges. If another diamond grading lab were to stand behind its work with a guarantee, we could consider a revision to the review process; however, if no other lab stands behind its work, how could we subject our grading to be reviewed solely by them?

The grading guarantee is very serious and guides how we operate the lab.

It is quite well known that you can submit I1’s to certain labs and get SI2’s, and submit J’s and K’s and get I’s.

That doesn’t happen with GCAL because (1) it is unethical and immoral, and (2) it would trigger a massive financial risk to our company.

The largest coin grading company in the world has a similar guarantee

https://www.pcgs.com/guarantee

One of the largest card grading companies in the world has a similar guarantee

https://www.psacard.com/about/financialguarantee
 

Interesting that you’re hanging your hat on this.
In my opinion, it’s unethical to imply that stones on the borderline of grades don’t exist. There are stones that GIA (or whatever lab) might call SI2 one day and I1 the next day. And they could be right in both cases, because there’s no “hard line” separating SI2 from I1, or D from E. There are judgment calls.
The way you’re framing this, GIA (or whoever) is grading unethically because they don’t guarantee their grades, when it’s exactly the opposite: they don’t guarantee the grades because it’s not ethical to guarantee a subjective measure such as the color and clarity of a diamond.
 
Angelo- I’m sorry if it seems like I’m picking on you or GCAL. We proudly offer diamonds with GCAL reports. I have found the grading to be accurate.
If I were speaking to a client and they asked about the guarantee, I’d discuss my opinion and viewpoint on the matter.
The biggest issue for me in this thread is that we have you and other tradespeople saying directly that other labs don’t offer a guarantee for nefarious reasons.
IMO casting doubts on competitors isn’t a good look, especially when your reasoning is not on solid ground.
Offer a guarantee- that’s your prerogative.
But once it’s discussed publicly it will draw criticism. And once the guarantee is used to deride other labs you’ll likely find even more vociferous objections, such as this thread.
 
I hope the webinar(s) dive deeper into the nuances of how the AI will determine borderline calls. In simple terms, I envision advanced technology relating everything back to a numerical value, with whatever thresholds the programmer codes then defining a given color or clarity.

If the AI detected 999 and 1000 was the cut-off point, would that conclusively make the AI grade better? It seems plausible that the machine should be able to respond the same way over and over, as it’s detecting a number (assuming no machine failure). Which raises another point about the accuracy and precision of the machine, and the failure rate of the individual parts and the machine as a whole.

Humans will occasionally fail to make the same borderline call but machines will also have some failure rate as well, even if it’s better (less) than humans because of the aforementioned accuracy and part/machine failure or wear rates.

What is the current protocol at GCAL now using human based grading? I see it’s by a 2-3 person team and consensus. But let’s dive deeper. Is it majority vote wins? The manager wins? The lower of the two grades in an effort to be conservative? The higher of the two to provide benefit to the paying client?
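For what it’s worth, the consensus-rule alternatives being asked about can be sketched in a few lines of code. This is purely illustrative and assumes nothing about GCAL’s actual protocol; the grade scale and rule names are invented for the example.

```python
# Illustrative sketch only: three possible consensus rules for a panel
# of 2-3 graders, each returning a grade on an ordered scale.
# Not GCAL's actual protocol; scale and rule names are invented.
from collections import Counter

SCALE = ["D", "E", "F", "G", "H", "I", "J", "K"]  # better -> worse

def consensus(grades: list[str], rule: str) -> str:
    if rule == "majority":
        # Most common grade wins; a tie falls through to "conservative".
        grade, count = Counter(grades).most_common(1)[0]
        if count > len(grades) // 2:
            return grade
        rule = "conservative"
    if rule == "conservative":
        return max(grades, key=SCALE.index)  # lower (worse) grade wins
    if rule == "generous":
        return min(grades, key=SCALE.index)  # higher (better) grade wins
    raise ValueError(f"unknown rule: {rule}")

print(consensus(["G", "G", "H"], "majority"))   # G
print(consensus(["G", "H"], "conservative"))    # H
print(consensus(["G", "H"], "generous"))        # G
```

Each rule gives a different answer on a split panel, which is exactly why the question matters.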

And do you see this changing in the future? I could see future color/clarity grading being further enhanced by stating a degree of certainty on the report. Not so much that it’s 99% accurate human/machine grading, but rather that the stone measured 999 vs. 1000. In a sense consumers already do this when they talk about high G, low G, etc. The report will say G, but there is recognition of a different range/strength for a G color.

To provide some possible alternatives, would a future report read as follows:

1. G color, low
2. G/H color
3. G color, 999 on scale of 990-1000 (envisioning graphic identifying a bar/tick between F and H to show how that numerical value falls).

I prefer 3 myself as it leaves less to assumption, with the caveat there is a standard amongst all labs, very precise and low failure rate on machinery doing the grading, etc.

Just my 2 cents, but this would have a more meaningful effect for the consumer than a guarantee that can only be validated by the original author under limited time and use conditions. In effect it would acknowledge that borderlines exist and show how that particular stone stacks up against others. I suspect retailers would hate this, as it means those “barely G’s” would be less valuable.
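The “option 3” report format above is easy to mock up. The sketch below uses a hypothetical numeric scale with invented boundary values (F at 1000, G at 990, H at 980, buckets 10 units wide); it just shows how a raw machine score and a bucketed grade could be reported together so a “barely G” stays visible.

```python
# Illustrative only: a machine measures color as a number, the grade is
# the bucket that number falls in, and the report keeps the raw score.
# All boundary values here are invented for the example.

# Hypothetical boundaries: a score >= the threshold earns that grade.
GRADE_BOUNDARIES = [
    ("F", 1000),
    ("G", 990),
    ("H", 980),
]

def grade_with_score(measured_score: int) -> str:
    """Return a grade plus the raw score, e.g. 'G (999 on 990-1000)'."""
    for grade, lower_bound in GRADE_BOUNDARIES:
        if measured_score >= lower_bound:
            upper = lower_bound + 10
            return f"{grade} ({measured_score} on {lower_bound}-{upper})"
    return f"I or below ({measured_score})"

# A 999 and a 991 are both "G", but the report makes the difference visible.
print(grade_with_score(999))  # G (999 on 990-1000)
print(grade_with_score(991))  # G (991 on 990-1000)
```

The design point is that the bucket (the letter grade) and the measurement (the score) are two different pieces of information, and only reporting the bucket throws the second one away.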
 

It all sounds peaches and cream, Sledge, but there are so many flaws in these systems that much of it becomes irrelevant.
1. Humans have two eyes, not the cyclops view used by all the approaches we are discussing.
2. Cut affects face-up color, but we measure / assess through the pavilion.
3. Size affects clarity.
4. Clarity is a size-of-inclusion system from Flawless down. From I3 up it’s more about % coverage. Where the two meet at VS/SI it’s a s##t fight.
5. The color and intensity of fluorescence and the cut-quality ray path lengths make a big difference.
6. 7. 8. 9. etc.
 
I would genuinely like to understand WHO gets to decide if the grading was inaccurate or not.

If a stone comes back from GCAL at G/SI1 (for example), and GIA says H/SI1 or G/SI2, does that trigger the money-back guarantee? Or does GCAL itself have to agree that it messed up the grading?

EDIT. I see that @GCAL-Angelo said a stone needs to be re-submitted to GCAL themselves. Do you happen to have any stats about how often GCAL has admitted they made an error?
 
Individual color and clarity grades are not distinct entities, but rather small ranges on a spectrum. There is inherent subjectivity in making calls that are on the borderline. Traditional color grading is done by humans using master sets. Each of those master sets is necessarily slightly different from the others. Clarity grades are determined largely by how easy or difficult inclusions are to see, considering size, number and relief. Considering that practically every diamond has a different combination of those things, borderline clarity calls are even more prone to subjectivity.

But I also feel like machine grading is the realm of the future and a very worthwhile goal. The transition is going to be controversial. But once it is standardized, it will be highly repeatable and predictable which is really the goal. (It is why master stones have always been used in labs, rather than just a "trained eye"). I feel like GIA has done a commendable job of quietly increasing the number of diamonds they grade solely by machine, and avoiding any major outcry over the practice.

To me it is a little like the controversy surrounding self-driving cars. Will they ever get to a point where they are flawless and there is never a glitch or accident? No. But will the highways become much, much safer places as a result of the automation? Most definitely, even with today's relatively young state of the art.
 
AI and ML are used - meaningfully - in nearly every industry. Many of those verticals don’t necessarily advertise their usage to the general public, though!!

That disclosure of what’s done by humans, what’s done by machine - it’s kind of a double-edged sword. On the one hand the divulgence is welcome. Especially in an industry like this one, that has such a long history of opacity. On the other hand, though, consumers are much more tech-savvy now than they’ve ever been before, and the tech industry itself has redefined words like “transparency”… Which makes nomenclature mismatches that much more likely, and those mismatches engender consumer confusion and mistrust.

To me, “transparency” doesn’t mean more information on a report (or certificate!) if that information was derived/evaluated/adjudicated/whatever via proprietary black-box analyses. In fact - calling that sort of information “transparency” makes me immediately question the author’s integrity. Why? Because I work in technology, and in technology in 2022/2023 “transparency” means ditching the secretive “unique”, “proprietary”, “special sauce” software.

Phone cameras failing to recognize when Asian individuals’ eyes are open. Photo filters labelling Black people as apes. Chatbots turning into racist misogynists within hours of release. Faucets refusing to turn on for non-light-skinned people. Resume-screening AI learning to eliminate resumes submitted by women. Home assistants refusing to acknowledge accents and dialects. Vehicles mischaracterizing obstacles in their paths. Vehicles switching to manual control immediately before crashing. The technology behemoths of AI and ML have colourful histories of disaster. There is simply no possibility that gemstone grading is/will be somehow exempted.

Now, in all of those instances I referenced above, the fundamental issues were insufficiently heterogeneous datasets and training skew. Sarine clearly has access to an incredible volume of data, from sheer stone throughput. Volume is the first step. Curation, inspection, identification of assumptions, testing of those assumptions… In every single other major industry that *advertises* use of AI and ML, there are *public* resources describing these steps. But so far I’ve seen nothing at all from the gemstone industry. And believe me, I’ve looked. Nothing that doesn’t boil down to “AI Good, Scale Good, Cloud Good, trust us”.

But then again, in all those other industries the real transparency (by technologists’ definition) came only after some public embarrassments. I would have expected that the diamond industry would learn from its predecessors but clearly… Based on GIA’s bungled rollout, Sarine’s lengthy development history that has thus far yielded no consumer-facing mindshare, now apparently GCAL’s marketing… That is not the case. It’s unfortunate.
 
Hi, Mr GCal,

I am just a consumer, but I like your idea of a guarantee. Who should determine whether or not the diamond has been incorrectly graded but the company who issued the guarantee, of course. If you have a problem with a GM car, you don't take it to Ford to confirm your guarantee. If an appraiser finds an error, send it back to GCAL. Sounds reasonable to me.

It would seem to me that a program could be written to spit out borderline grades for human eyes to examine. Use the master sets then.
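Annette’s flag-the-borderline idea is also simple to sketch. The boundary values and review margin below are invented for illustration; the point is just that a measured score landing near any grade cutoff can be routed to a human grader with the master sets.

```python
# Illustrative sketch of "flag borderline stones for human eyes": if the
# machine's score lands within some margin of a grade boundary, route
# the stone to a human grader. Cutoffs and margin are invented values.

GRADE_CUTOFFS = {"F": 1000, "G": 990, "H": 980}
BORDERLINE_MARGIN = 2  # score units from a cutoff that trigger review

def needs_human_review(score: int) -> bool:
    """True if the score is close enough to any cutoff to warrant review."""
    return any(abs(score - cutoff) <= BORDERLINE_MARGIN
               for cutoff in GRADE_CUTOFFS.values())

print(needs_human_review(999))  # True: right at the F/G border
print(needs_human_review(995))  # False: comfortably mid-G
```

Stones well inside a bucket get the fast machine path; only the contested sliver near each boundary costs human time, which is where the master sets matter most anyway.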

I don't see this as a problem. I see it as value add. But, I'm just a consumer.

Good Luck Angelo!

Annette
 
Traditional color grading is done by humans using master sets. Each of those master sets is necessarily slightly different from the others. Clarity grades are determined largely by how easy or difficult inclusions are to see, considering size, number and relief.
But I also feel like machine grading is the realm of the future and a very worthwhile goal. The transition is going to be controversial. But once it is standardized, it will be highly repeatable and predictable which is really the goal. (It is why master stones have always been used in labs, rather than just a "trained eye"). I feel like GIA has done a commendable job of quietly increasing the number of diamonds they grade solely by machine, and avoiding any major outcry over the practice.

To me it is a little like the controversy surrounding self-driving cars. Will they ever get to a point where they are flawless and there is never a glitch or accident? No. But will the highways become much, much safer places as a result of the automation? Most definitely, even with today's relatively young state of the art.
Dear, dear Bryan, you seem not to understand the information I keep posting: the future you refer to has been here for a very long time.
GIA, as I have explained, has graded rounds at least up to 2ct sizes with an instrument, and started doing so 21 years ago. Clarity under 1ct, they plainly stated two years ago, is graded using the IBM AI.
And clarity, Bryan, is, as I have explained, nowhere near as simple as “Clarity grades are determined largely by how easy or difficult inclusions are to see, considering size, number and relief.”

from: https://www.gia.edu/doc/Coloring-Grading-D-to-Z-Diamonds-at-the-GIA-Laboratory.pdf
"This approach was followed for the next year until the device’s ability to perform accurate color grading had been validated. In 2001, following its application in the grading of tens of thousands of diamonds, we integrated the device as a “valid” opinion in the grading process, with visual agreement by one or more graders required to finalize the color grade of a particular diamond. Since then, the vast majority of diamonds passing through the laboratory have been graded by combining visual observation with instrumental color measurement. Note that this instrument is for the laboratory’s internal use and is not available commercially."

Further to that, I have explained that at least round diamond symmetry is graded using Helium scans.
Now that GIA has acquired AGS technology, we should expect they too will come out with an instrumental grade for fancy shapes.
Bring it all on.
But I do not trust Sarine!
 
Clarity grades are determined largely by how easy or difficult inclusions are to see, considering size, number and relief.

I believe we've discussed this in the past...but it's an important distinction. Clarity is graded based on the presence of imperfection- as opposed to the visibility of said imperfection(s).
That's why we can have a 1ct stone with an eye-visible but tiny carbon spot, dead center, graded (correctly) VS2....and an I1 with a feather in the pavilion that is completely invisible face up.

Maybe this aspect should change- along with grading "colorless" diamonds through the pavilion.
But based on GIA grading as it stands now.......
 
There is an argument to be had that, as to whether a particular diamond is "Rare White" and "Loupe Clean", HRD is the official arbiter.
If you wish to call a diamond G and Flawless then GIA is the official arbiter.
 
I think this is a natural partnership. I've been a friend of the Palmeris and client of both GCAL and Sarine for decades, and I see them as both very different and extremely compatible companies. Sarine is an equipment company, while GCAL is a service company. Sarine sells tools. GCAL sells data. Both are excellent at what they do. It's a perfect combination. They're both heavily vested in the success of the other and the industry badly needs both. They bring completely different skill sets.
 