Code Plagiarism Detection API
Build plagiarism detection into your applications with our powerful REST API. Detect copied code across 20+ billion sources with industry-leading accuracy.
Detailed Plagiarism Analysis
Get comprehensive, line-by-line plagiarism analysis for specific submissions. This endpoint provides detailed match information, source code comparisons, and forensic-level data for investigating potential plagiarism.
Endpoint
POST https://codequiry.com/api/v1/check/results
🔍 Detailed Analysis Overview
While the overview endpoint gives you summary scores, the detailed results endpoint provides forensic-level analysis of exactly what code matches, where it matches, and how similar it is. This is essential for investigating suspected plagiarism cases.
What You'll Get
- Complete Source Code: Full content of analyzed files with parsed structure
- Peer Matches: Exact line ranges where code matches other submissions
- External Matches: Similarities found in web/database sources
- Match Statistics: Average, minimum, and maximum similarity scores
- Related Files: Source code of matched submissions for comparison
- Match Metadata: Token counts, similarity percentages, and match types
📤 Request Parameters
Specify both the check ID and the specific submission ID you want to analyze in detail.
| Parameter | Type | Required | Description |
|---|---|---|---|
| check_id | Integer | ✅ Yes | ID of the completed check |
| submission_id | Integer | ✅ Yes | ID of the specific submission to analyze (from the overview response) |
Request Example
curl -X POST \
  'https://codequiry.com/api/v1/check/results?check_id=2810&submission_id=14589' \
  -H 'Accept: application/json' \
  -H 'apikey: YOUR_API_KEY_HERE'
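The same request in Python, as a minimal sketch assuming the `requests` package is installed (the IDs and API key are the placeholder values from the curl example):

```python
import requests

# Request detailed results for one submission in a completed check
response = requests.post(
    "https://codequiry.com/api/v1/check/results",
    params={"check_id": 2810, "submission_id": 14589},
    headers={"Accept": "application/json", "apikey": "YOUR_API_KEY_HERE"},
)
response.raise_for_status()
data = response.json()

print(data["avg"], data["max"], data["min"])  # summary similarity statistics
```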
Detailed Response
{
"submission": {
"id": 14589,
"filename": "student3_python_assignment",
"status_id": 4,
"created_at": "2024-01-15 14:35:22",
"updated_at": "2024-01-15 14:45:18",
"result1": "12.50",
"result2": "8.75",
"result3": "15.20",
"total_result": "85.45",
"submissionfiles": [
{
"id": 128042,
"submission_id": 14589,
"filedir": "student3_assignment/calculator.py",
"content": "def calculate_factorial(n):\n result = 1\n for i in range(1, n + 1):\n result *= i\n return result\n\nprint(calculate_factorial(10))",
"created_at": null,
"updated_at": null,
"language_id": 14
}
]
},
"avg": 32.43,
"max": "85.45",
"min": "12.50",
"peer_matches": [
{
"id": 616853,
"submission_id": 14589,
"submission_id_matched": 14587,
"similarity": "85.45",
"matched_similarity": "78.30",
"file": "student3_assignment/calculator.py",
"file_matched": "student1_assignment/main.py",
"line_start": 1,
"line_end": 5,
"tokens": 12,
"created_at": null,
"updated_at": null,
"line_matched_start": 3,
"line_matched_end": 7,
"match_type": 1
}
],
"other_matches": [
{
"id": 789123,
"submission_id": 14589,
"source_type": "web",
"source_url": "https://stackoverflow.com/questions/12345",
"similarity": "67.80",
"file": "student3_assignment/calculator.py",
"line_start": 1,
"line_end": 4,
"tokens": 8,
"match_snippet": "def calculate_factorial(n):\n result = 1\n for i in range(1, n + 1):",
"match_type": 2
}
],
"related_submissions": [
{
"id": 14587,
"filename": "student1_assignment",
"total_result": "78.30"
}
],
"related_files": [
{
"id": 128043,
"submission_id": 14587,
"filedir": "student1_assignment/main.py",
"content": "import math\n\ndef factorial_calc(number):\n result = 1\n for i in range(1, number + 1):\n result *= i\n return result\n\nprint(factorial_calc(10))",
"created_at": null,
"updated_at": null,
"language_id": 14
}
]
}
📊 Response Structure Explained
Main Response Fields
| Field | Type | Description |
|---|---|---|
| submission | Object | Complete submission data including source files |
| avg / max / min | Number | Statistical summary of similarity scores across all matches |
| peer_matches | Array | Line-by-line matches with other submissions in the same check |
| other_matches | Array | Matches found in external databases and web sources |
| related_files | Array | Source code of matched submissions for side-by-side comparison |
Match Object Structure
| Field | Description |
|---|---|
| similarity | Similarity percentage for this specific match |
| file | Path to the file within the submission |
| line_start / line_end | Line range of the matched code section |
| tokens | Number of code tokens that matched |
| match_type | Type of match: 1 = peer, 2 = web, 3 = database |
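If you prefer typed access over raw dictionaries, the match object can be modeled roughly as below. This is a sketch, not an official client class: the field names follow the example response above, and numeric strings such as similarity are converted on parse.

```python
from dataclasses import dataclass

# Match type codes as documented in the table above
MATCH_TYPES = {1: "peer", 2: "web", 3: "database"}

@dataclass
class Match:
    similarity: float   # similarity percentage for this specific match
    file: str           # path to the file within the submission
    line_start: int     # first line of the matched section (1-based)
    line_end: int       # last line of the matched section (inclusive)
    tokens: int         # number of code tokens that matched
    match_type: int     # 1 = peer, 2 = web, 3 = database

    @classmethod
    def from_api(cls, raw: dict) -> "Match":
        return cls(
            similarity=float(raw["similarity"]),
            file=raw["file"],
            line_start=raw["line_start"],
            line_end=raw["line_end"],
            tokens=raw["tokens"],
            match_type=raw["match_type"],
        )
```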
🎯 Match Types Explained
Peer Matches (match_type 1)
Similarities between submissions uploaded to the same check.
- Most common for academic assignments
- Indicates potential student collaboration
- Includes submission_id_matched for reference
- Shows exact line ranges in both files
Web Matches (match_type 2)
Code found on websites, forums, and online repositories.
- Common sources: Stack Overflow, GitHub
- Includes source_url when available
- Shows match_snippet of found code
- May indicate copied solutions
Database Matches (match_type 3)
Similarities with Codequiry's plagiarism database.
- Historical submissions from other institutions
- Previously detected plagiarism patterns
- Academic paper and textbook content
- Anonymous source protection
🔍 Code Comparison Visualization
Use the detailed match data to create side-by-side code comparisons:
Original Submission
student3_assignment/calculator.py (lines 1-5)
def calculate_factorial(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result
Matched Submission
student1_assignment/main.py (lines 3-7)
def factorial_calc(number):
    result = 1
    for i in range(1, number + 1):
        result *= i
    return result
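A rough sketch of how the line ranges in a peer match can be used to pull both fragments out of the response for a comparison like the one above. It assumes the detailed response has already been parsed into a dictionary named data (as in the request example earlier):

```python
def extract_lines(content: str, start: int, end: int) -> str:
    """Return lines start..end (1-based, inclusive) of a file's content."""
    return "\n".join(content.split("\n")[start - 1:end])

match = data["peer_matches"][0]

# Fragment from the analyzed submission (the file named in the match)
own_files = data["submission"]["submissionfiles"]
original = next(f for f in own_files if f["filedir"] == match["file"])
original_code = extract_lines(original["content"], match["line_start"], match["line_end"])

# Corresponding fragment from the matched submission
matched = next(f for f in data["related_files"] if f["filedir"] == match["file_matched"])
matched_code = extract_lines(matched["content"], match["line_matched_start"], match["line_matched_end"])

print(f"{match['file']} (lines {match['line_start']}-{match['line_end']}):\n{original_code}\n")
print(f"{match['file_matched']} (lines {match['line_matched_start']}-{match['line_matched_end']}):\n{matched_code}")
```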
💻 Implementation Examples
Processing Match Data
JavaScript Processing
// Extract high-similarity matches
const highRiskMatches = data.peer_matches.filter(match =>
  parseFloat(match.similarity) > 70
);

// Group matches by type
const matchesByType = {
  peer: data.peer_matches,
  web: data.other_matches.filter(m => m.source_type === 'web'),
  database: data.other_matches.filter(m => m.source_type === 'database')
};

// Calculate average similarity
const avgSimilarity = data.peer_matches.reduce((sum, match) =>
  sum + parseFloat(match.similarity), 0
) / data.peer_matches.length;
Python Analysis
def analyze_plagiarism_results(response_data):
    submission = response_data['submission']
    peer_matches = response_data['peer_matches']

    # Identify suspicious patterns (similarity above 80%)
    high_similarity = [
        match for match in peer_matches
        if float(match['similarity']) > 80
    ]

    # Extract the matched code segment for each high-similarity match
    for match in high_similarity:
        start_line = match['line_start']
        end_line = match['line_end']

        # Use the file referenced by the match rather than assuming the first file
        file_content = next(
            (f['content'] for f in submission['submissionfiles']
             if f['filedir'] == match['file']),
            submission['submissionfiles'][0]['content']
        )

        # Line ranges are 1-based and inclusive
        lines = file_content.split('\n')
        matched_code = '\n'.join(lines[start_line - 1:end_line])

        print(f"Match: {match['similarity']}% similarity")
        print(f"Code:\n{matched_code}")
⚠️ Error Responses
Invalid Submission ID (422 Unprocessable Entity)
{
"error": "Invalid submission_id provided"
}
The submission_id doesn't exist in the specified check. Verify the submission ID from the overview response.
Results Not Available
The analysis is still processing. Wait for the check to complete (status_id = 4) before requesting detailed results.
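A minimal sketch of how a client might handle both cases, assuming the `requests` package. The exact shape of the "still processing" response isn't shown above, so this sketch simply retries until the returned submission reports status_id = 4:

```python
import time
import requests

def fetch_detailed_results(check_id, submission_id, api_key, poll_seconds=30):
    """Fetch detailed results, retrying while the check is still processing."""
    url = "https://codequiry.com/api/v1/check/results"
    headers = {"Accept": "application/json", "apikey": api_key}
    params = {"check_id": check_id, "submission_id": submission_id}

    while True:
        response = requests.post(url, params=params, headers=headers)

        if response.status_code == 422:
            # Invalid submission_id: retrying will not help
            raise ValueError(response.json().get("error", "Invalid submission_id provided"))

        response.raise_for_status()
        data = response.json()

        # status_id = 4 indicates the check has completed
        if data.get("submission", {}).get("status_id") == 4:
            return data

        time.sleep(poll_seconds)  # arbitrary polling interval; tune to your workload
```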
🎯 Investigation Best Practices
Focus Areas
- Matches with >70% similarity
- Large token counts (>20)
- Multiple matches per submission
- Consecutive line ranges (see the flagging sketch after these lists)
Code Analysis
- Compare variable naming patterns
- Check comment similarities
- Look for unique code structures
- Analyze whitespace patterns
Context Matters
- Consider assignment complexity
- Account for common patterns
- Check template/starter code
- Review submission timing
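As a starting point, a small sketch that applies the focus-area thresholds above to peer matches. The 70% similarity and 20-token cutoffs are the suggestions from this section, not API-defined limits:

```python
from collections import defaultdict

def flag_matches_for_review(peer_matches, min_similarity=70.0, min_tokens=20):
    """Group peer matches exceeding the suggested similarity and token thresholds
    by the submission they matched against."""
    flagged_by_partner = defaultdict(list)
    for match in peer_matches:
        if float(match["similarity"]) > min_similarity and match["tokens"] > min_tokens:
            flagged_by_partner[match["submission_id_matched"]].append(match)

    # Several flagged matches against the same submission are a stronger signal
    return dict(flagged_by_partner)
```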
🚀 Next Steps
After analyzing detailed results, you may want to download files or generate reports:
Download Files
Get original source files and comprehensive reports for offline analysis.
Download API Guide