Are you relying on questionable data in your food fraud assessments? Take control: don't just trust the numbers. Discover why understanding the origin and characteristics of your data is a crucial step. This blog post is a flashback review of our 2012 scholarly journal article.
This post reviews the academic article "A Review of the Economic Impact of Counterfeiting and Piracy Methodologies and an Assessment of Currently Utilized Estimates," published in the International Journal of Comparative and Applied Criminal Justice (2012). The authors, John W. Spink and Zoltán Levente Fejes, conducted what they call "the first universal review of the estimation of the economic impact of counterfeiting and piracy." (p. 1) This publication is one of the most important research articles on food fraud, particularly in terms of how vulnerability assessments are conducted and how the data is used, or misused.
The article addresses a root problem: “Many estimates were cited without discussion of their derivation or without stating a methodological foundation.” (p. 2) This is a serious issue because food fraud prevention relies on accurate, credible, and contextually relevant data. As more food safety professionals turn to AI-based analysis and automation tools, understanding the strengths and limits of your data becomes even more important.
Artificial Intelligence (AI) offers tremendous value, but only if it is applied correctly. This includes carefully selecting data sources and knowing when and how to trust different information inputs. This is directly aligned with the framework we've previously introduced: to use AI effectively, professionals must become a "master of the tool," a "master of the resource," and a "master of the field." This article supports that second role, "master of the resource," by challenging us to be intentional with our data.
Underlying Factors That Challenge Estimates of Product Counterfeiting
The article outlines five major barriers that complicate accurate economic impact estimates for counterfeiting. These apply directly to food fraud vulnerability assessments (FFVAs), especially when AI or data analytics are involved.
- Seizure Data and Interdiction Rate: “Seizure data reports are not considered as core resource documents because they only represent what has been caught and not an estimate of the entire counterfeit product market.” (p. 3) Example: AI may amplify a false signal if the only data it analyzes is from high-profile seizures (see the sketch after this list).
- Lack of Historical Data: “The data is often not consistently gathered year to year.” (p. 3) Example: Without continuous data, AI cannot detect true patterns, just noise.
- Data Uncertainty: “The most important insight is that although incomplete data sets are cited repeatedly, the limitations are not addressed or identified.” (p. 3) Example: Feeding inconsistent or partial data into a model can create more confusion than clarity.
- Data Input Uncertainty: “The estimates are based on data from a criminal enterprise or law enforcement actions from many different countries with no standardized methodology.” (p. 3) Example: AI cannot reconcile conflicting or unverified source material across jurisdictions.
- Model Uncertainty: “Another challenge is that due to the evolving nature of the marketplace and the fraudsters, the models are often not a good fit with the situation.” (p. 4) Example: An AI model designed for retail loss may be inappropriate for evaluating upstream supply chain fraud.
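To make the seizure-data barrier concrete, here is a minimal illustrative sketch. It is not from the article, and the seizure count and interdiction rates are hypothetical: any extrapolation from seizures to the total counterfeit market requires an assumed interdiction rate, and the resulting figure is driven almost entirely by that assumption.

```python
# Minimal illustrative sketch (hypothetical numbers, not from Spink & Fejes 2012):
# extrapolating a total counterfeit market from seizure data requires an
# assumed interdiction rate, and the result hinges on that assumption.

seized_units = 1_000  # hypothetical: counterfeit units caught at the border

# The interdiction rate (share of all counterfeit goods actually caught)
# is unknown, so the analyst must assume one. Plausible guesses vary widely.
for assumed_rate in (0.01, 0.05, 0.10):
    implied_market = seized_units / assumed_rate  # extrapolated total in circulation
    print(f"assumed interdiction rate {assumed_rate:.0%} -> "
          f"implied market of {implied_market:,.0f} units")

# 1% -> 100,000 units; 5% -> 20,000 units; 10% -> 10,000 units.
# The same seizure data supports a tenfold range of "market size" figures.
```

The same seizure count yields a tenfold range of implied market sizes, which is exactly why the authors do not treat seizure reports as core resource documents.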
Each of these barriers introduces ambiguity that can mislead food fraud professionals, especially when AI is used without critical thinking or context review by a “master of the field.”
INSIGHT – The AI Opportunity: Better Inputs, Better Outputs
This article is essential reading for food fraud vulnerability assessment practitioners. The authors note, “Few experts have ever created estimates” (p. 5), which should be a wake-up call: the question is so difficult that few experts have even attempted it. Anyone who assumes existing published statistics are automatically reliable needs to dig deeper.
This also aligns directly with the AI readiness principle of being a “master of the resource.” If you are using AI tools to assist in conducting a food fraud vulnerability assessment, it is essential to understand the origin of the data, how it was generated, and its associated limitations.
AI models are only as good as the data and information provided. This article clarifies that those inputs—often statistics and estimates from past studies—are riddled with gaps and inconsistencies. That’s why this research is more than academic; it’s a tool to help professionals improve the entire system of food fraud risk management.
The key takeaway is that the absence of reliable data is, in itself, a red flag. Until the data sources are improved or confirmed, we must be cautious not to let automation create a false sense of precision.
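One defensive practice, sketched below with purely hypothetical input ranges, is to propagate the input uncertainty through the calculation and report an interval rather than a single figure; the width of the interval makes any false precision visible.

```python
# Illustrative sketch with hypothetical inputs: propagating input uncertainty
# so the output is an honest range, not a single precise-looking number.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def one_market_estimate() -> float:
    """Draw one market estimate under randomly sampled input assumptions."""
    seized_units = random.uniform(800, 1_200)       # uncertain seizure count
    interdiction_rate = random.uniform(0.01, 0.10)  # unknown catch rate
    return seized_units / interdiction_rate

draws = sorted(one_market_estimate() for _ in range(10_000))
low, median, high = (draws[int(len(draws) * q)] for q in (0.05, 0.50, 0.95))

print(f"market estimate: {median:,.0f} units "
      f"(90% interval: {low:,.0f} to {high:,.0f})")
# The interval spans roughly an order of magnitude; a lone point estimate
# would hide that spread and manufacture a false sense of precision.
```

Reporting the interval alongside any point estimate keeps the limitations of the underlying data in view rather than buried inside the model.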
Takeaway Points
- Food fraud prevention research is an interdisciplinary subject that draws on a wide range of disciplines, including social science and criminology.
- It is crucial to review the nature of the data or information used in vulnerability assessments.
- Expanding your expertise and utilizing AI where possible requires not only being a “master of the resource” (understanding how to use data effectively) but also being a “master of the field” (knowing if the results are logical and applicable).
Reference: This is a summary of key points in our article: Spink, J., & Fejes, Z. L. (2012). A review of the economic impact of counterfeiting and piracy methodologies and an assessment of currently utilized estimates. International Journal of Comparative and Applied Criminal Justice, 36(4), 249–271. https://doi.org/10.1080/01924036.2012.726320
[Note: Blogpost research, content development, and editing, supported by ChatGPT. Image created using OpenAI’s DALL·E with significant prompt engineering and human refinement.]
Glossary of Key Terms:
| Keyword | Definition (Verbatim from Article) | Example |
| --- | --- | --- |
| Seizure Data and Interdiction Rate | “Seizure data reports are not considered as core resource documents because they only represent what has been caught and not an estimate of the entire counterfeit product market.” | Customs seizes 1,000 counterfeit goods at the border, but the total number in circulation could be 10,000 or more; most go undetected. |
| Lack of Historical Data | “The data is often not consistently gathered year to year.” | Counterfeiting data collected in 2010 and 2012 but not 2011 or 2013, preventing any valid trend analysis. |
| Data Uncertainty | “The most important insight is that although incomplete data sets are cited repeatedly, the limitations are not addressed or identified.” | A report claims 10% of global trade is counterfeit but fails to reveal that the base dataset excluded informal markets and developing countries. |
| Data Input Uncertainty | “The estimates are based on data from a criminal enterprise or law enforcement actions from many different countries with no standardized methodology.” | A survey of counterfeit incidents includes unverified claims from different law enforcement agencies using inconsistent definitions of what qualifies as fraud. |
| Model Uncertainty | “Another challenge is that due to the evolving nature of the marketplace and the fraudsters, the models are often not a good fit with the situation.” | Applying a model designed for physical piracy of DVDs to estimate the scope of counterfeit pharmaceuticals leads to unrealistic figures. |