
International Songwriting Competition 2024 AI-Detection Report

Prohibited Use of AI-Music-Generation in the 2024 International Songwriting Competition, the Wrongfully Celebrated "Winners" Who Hope(d) to Get Away With It, and the Organization's Systemic Failures that Continue to Enable and Reward It

Author: Joseph Stanek

Date: November 27, 2025

This is a real-world investigation published in the immediate wake of the Warner Music Group vs. Suno lawsuit, which has intensified the music industry's scrutiny of AI-composed music and the legal enforcement of platform and contest rules.

1. Executive Summary

This report presents the findings of an independent, evidence-based forensic review of publicly released winning song entries from the 2024 International Songwriting Competition (ISC). The investigation was conducted by Joseph Stanek, a professional musician, producer, songwriter, and educator with more than thirty years of specialized training in auditory analysis, music theory, vocal science, and music production workflows.

The goal of this report is twofold:

  1. to defend the integrity of human songwriting in an era of rapidly advancing generative technology, and

  2. to protect honest entrants, industry professionals, and music-education communities from the consequences of improper or inconsistent rule enforcement.

A Critical Truth: Under ISC's 2024 Rules, Any AI Involvement Requires Disqualification

The 2024 ISC rules explicitly state:

Rule #13: “ISC prohibits any song or lyrics written partially or in full by ChatGPT or any other AI-generated content. If ISC identifies any abuse or violation of this policy, the song will be immediately disqualified from the competition, and no refund will be given to the entrant.”

 

This standard is unequivocal. Under this rule, any presence of AI generation, including AI performance, AI vocals, AI-generated melodies, AI text-to-music structure, or AI-encoded production artifacts, requires immediate disqualification, no questions asked.

Technical Constraint at the Time of Submission

The competition's final Extended Deadline occurred at midnight CT on November 6, 2024.
 

At the time, the leading AI generative music platforms (Suno v3.5 and Udio v1.5) were:

  • exclusively text-to-music interfaces;

  • unable to process human-authored sheet music, melodic notation, stems, or DAW sessions;

  • unable to "perform" or render human-authored or -sung musical compositions;

  • fully capable of generating complete, structured songs as audio files, including vocals, melody, instrumentation, and arrangement, from text prompts alone or with human-authored lyrical input.

Key Findings of this Investigation

This forensic review identifies the following:

1. Two winning entries — Comedy (2nd Place) and Christian/Gospel (3rd Place) — exhibit unmistakable, professionally verifiable indicators of AI generation and AI performance.

These include:

  • identical, machine-repeated vocal waveforms (a mathematical impossibility for even the most trained human singer),

  • AI-specific timbral traits inconsistent with natural vocal fold production,

  • envelope-shaping patterns characteristic of leading text-to-music engines,

  • metadata anomalies consistent with AI workflows and inconsistent with human-produced audio.

 

2. Independent AI-detection tools corroborate these findings.

IRCAM Amplify — an internationally respected research institution specializing in audio analysis — produced:

  • 95% AI-probability for the Christian/Gospel 3rd-Place-winning song

  • 98% AI-probability for the Comedy 2nd-Place-winning song

(Exact screenshots provided in Section 5: Findings.)

These results do not stand alone; they support and reinforce the conclusions of the investigation's primary method, expert auditory analysis.

3. OSINT investigations reveal atypical public musical footprints for both credited, winning “songwriters.”

  • One individual publicly released over 110 AI-generated tracks on SoundCloud within a single year, openly using consumer text-to-music platforms.

  • The other has no public musical footprint and is currently employed as a government investigator in counter-threat analysis, a background difficult to reconcile with suddenly producing an award-winning song, especially one bearing clear markers of authorship and performance by an artificially intelligent system.

 

While neither fact constitutes a violation of competition rules, both contrast sharply with the typical profiles of ISC winners and reinforce the conclusion that the submitted works exhibit authorship by artificial intelligence.

Intent and Scope

This report does not speculate about private conduct, intentions, or motives. It does not accuse individuals of misconduct outside the competition. Its focus is the publicly available winning competition material itself and the ISC rules governing it.

The only questions at issue are:

  • Did any of the winning entries contain verifiable AI-generated components?

  • If so, were they eligible under ISC’s 2024 rules?

  • What, if any, forensic or verification tools were used before proclaiming these songs “winners”?

  • What systemic breakdown allowed these works to pass screening, judging, and final approval without detection?

  • Who is willing to take responsibility for correcting this – publicly, transparently, and quickly?

Systemic Breakdown in Rule Enforcement

For months, the ISC has publicly showcased these winning songs on its website as the best original work of 2024, under an explicit rule that any AI-generated content is grounds for immediate disqualification. Yet songs that carry unmistakable signatures of AI generation, all of which are covered in detail throughout this report, were able to pass through:

  • the initial screening committee,

  • both the semifinalist and finalist selection processes,

  • the final judging by high-profile industry professionals,

  • public announcement of the winners,

  • administration of competition prizes to the winners,

  • and promotion of the winners

...without a single flag being raised.

That is not a minor oversight. It is a structural failure of the contest’s safeguards, its listening standards, and its enforcement of its own rules.

The ISC’s celebrity judges are not just fancy names on a poster; they lend their reputations and authority to the promise that this competition can recognize and reward excellence in the art of human-authored songwriting. When obviously synthetic performances slip through this entire pipeline undetected, it calls into question not only ISC’s internal processes, but also the seriousness with which those reputations are being exercised.

 

This breakdown raises legitimate concerns regarding:

  • enforcement of eligibility requirements,

  • quality and consistency of the judging pipeline,

  • reputational risk to celebrity judges whose names endorse the winners,

  • fairness owed to thousands of entrants who followed the rules in good faith.

Why This Matters

Competitions like ISC set the global standard for what counts as human artistry in songwriting.
If organizations of this stature cannot reliably distinguish between human and machine authorship — especially under their own rules requiring zero AI involvement — the result is:

  • erosion of trust,

  • diminished respect for human musicianship,

  • unfair displacement of legitimate winners,

  • and normalization of AI-authored works as “songwriting.”

 

At this inflection point, integrity is not optional; for the survival of the music industry, integrity is existential.

Purpose of This Investigation and Report

This investigation exists to illuminate:

  • a systemic vulnerability in ISC’s screening processes,

  • a failure to enforce clearly written rules,

  • and a preventable breach of artistic integrity in a competition that markets itself as a global authority on human songwriting.

 

If the music industry’s most respected competitions cannot reliably distinguish between human and synthetic work, they risk normalizing artistic fraud at the exact moment when vigilance is most needed.

2. Disclaimer & Critical Caveat

Purpose of This Disclaimer

This report is a forensic analysis of publicly available audio files, metadata, and open-source, publicly accessible information. It does not rely on any internal ISC materials, private accounts, or privileged access. The purpose of this disclaimer is to define the scope of available evidence, clarify its limitations, and articulate the standards guiding the conclusions presented.


No private accounts, no login-restricted platforms, and no internal ISC materials were accessed at any time.

2.1 Limitations of Publicly Available Evidence & Metadata Reliability for ISC 2024 Winners

1. The Metadata Available is Not Original Submission Metadata

All audio files analyzed in this report were downloaded from ISC’s public website, not from the original submission database. There is clear evidence of internal metadata alteration by ISC before public release.

 

Examples include:

  • Retitled filenames: Original song titles have been replaced by ISC category labels (e.g., “aaa-1st.mp3”, “country-2nd.mp3”), meaning the true, original filenames, which could potentially reveal provenance clues, were not available for assessment.

  • A uniform “2.13” timestamp appended to the Artist field: Many files contain an appended “2.13” in their Artist metadata. Although its meaning is unknown, it is reasonable to infer that it may be an internal processing marker related to a possible February 13 internal list finalization (based on the public semifinalist announcement on March 3, 2025). Regardless of meaning, it confirms that ISC modified the metadata, preventing access to the unaltered originals.

 

Conclusion: Any metadata investigation must acknowledge that the data available is second-hand: modified by ISC, incomplete, and not fully representative of the original uploader’s file.

2. Metadata is Easy to Manipulate or Strip Entirely

Contemporary AI-music-generating platforms (Suno, Udio) leave only minimal identifying metadata. Even that minimal metadata is widely known to be easily removable.

 

There are:

  • Reddit threads

  • YouTube tutorials

  • Discord communities

  • Blog posts

 

…devoted specifically to teaching users how to remove AI-identifying metadata fields, including encoder tags, internal version labels, and system-embedded text markers.

 

As a result:

  • Clean metadata is not evidence of human authorship.

  • Dirty metadata is not required to prove AI authorship.

  • Metadata alone can never carry full evidentiary weight.
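
To illustrate just how low that barrier is, the following minimal Python sketch (using the mutagen library) deletes every ID3 tag from an MP3 in place. The filename is hypothetical, and an equivalent one-line FFmpeg command is noted in the comments; this illustrates the general technique, not any entrant's actual workflow.

```python
# pip install mutagen
# Minimal sketch: strip all ID3 metadata from an MP3 file in place.
# Equivalent CLI approach: ffmpeg -i in.mp3 -map_metadata -1 -c:a copy out.mp3
from mutagen.id3 import ID3, ID3NoHeaderError

def strip_all_tags(path: str) -> None:
    try:
        ID3(path).delete(path)  # removes the entire ID3 tag block from the file
    except ID3NoHeaderError:
        pass  # file carries no ID3 tags to begin with

strip_all_tags("example_song.mp3")  # hypothetical filename
```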

3. Caveat Regarding the Scope of Reviewed Works

This investigation reviewed 67 audio-based winning entries from the 2024 International Songwriting Competition. While the total number of winners is larger, seven entries were excluded from forensic analysis because they were either lyrics-only or video-only submissions, including the 1st Place Comedy winner, whose decision to submit his song as a video excluded it from this analysis. Video-based works contain different metadata structures and encoding pipelines, and YouTube-hosted files do not provide the same analyzable audio or metadata artifacts as downloadable song files.

These exclusions were made to ensure methodological fairness and consistency across the analyzed dataset. All 67 remaining audio-based winners — including the Grand Prize, People’s Choice, and every category's 1st-, 2nd-, and 3rd-place winner with an accessible audio file — were included in this review.

4. Several Winning Entries Exhibit Suspiciously “Too Clean” Metadata

Many of the winning audio files contain:

  • barren, non-descriptive metadata fields

  • missing title/artist/comment fields

  • generic encoder signatures typical of AI-generated music

  • stripped embedded comments

  • no ISRCs

  • no album artwork

  • no producer, engineer, or copyright metadata

 

While these elements do not inherently violate the ISC's rules, they are also not typical of human-produced work, especially among passionate songwriters entering a major international songwriting competition.

 

While this cleanliness is not itself proof of AI involvement, it is:

  • consistent with deliberate metadata scrubbing, and

  • inconsistent with honest, organic human creative workflows.

 

This aligns with known and well-documented behavior of users who generate music via text prompt, then sanitize metadata to obscure the method of creation.

5. The Number of Confirmed AI Entries is Smaller than the Number of Suspicious Entries

This investigation confidently identifies two entries whose audio signatures confirm AI involvement.
 

However:

  • several additional entries returned AI flags,

  • multiple entries share identical encoder fingerprints,

  • others exhibit machine-generated spectral patterns, and

  • a subset use AI-like phrase timing and envelope shaping.

Given the limitations of second-hand metadata, this report does not attempt to conclusively categorize the larger group of tracks that exhibit potential AI-related anomalies. Instead, it identifies them as indicators that warrant further scrutiny by ISC itself. Because ISC did not require entrants to submit DAW sessions, stems, draft versions, or any verifiable proof of total human authorship in 2024, only ISC has access to the primary materials necessary to perform a conclusive, authoritative assessment of those entries.

Accordingly, this report strongly recommends that ISC re-evaluate all winning entries from 2024 and adopt verification procedures commensurate with the standards of transparency and integrity expected from an organization that positions itself as the industry’s premier songwriting competition. Importantly, this recommendation is not satisfied by ISC’s newly implemented 2025 rules; future guidelines do not retroactively protect the 2024 entrants whose work was judged under a categorical AI prohibition. Nor would a closed, unverified internal review be sufficient. Meaningful accountability requires demonstrable, evidence-based verification. Additional recommendations for structural reforms, future safeguards, and competition-wide transparency are outlined in Section 6.

2.2 Primary Evidence Standard of This Report

The most reliable tool in this investigation is expert auditory judgment, a byproduct of more than 30 years of training and professional work as a vocalist, producer, music director, and music educator across live concert and staged musical performances, live and studio album recordings, and internationally broadcast televised concert productions.

All metadata and OSINT findings are secondary. They serve to support, contextualize, or corroborate the conclusions derived from expert listening.

2.3 Access Limitations (Legal Clarity)

The author did not have access to:

  • the original audio submissions

  • the entrants’ stems, DAW files, or drafts (if applicable)

  • AI music platform accounts (if applicable)

  • internal ISC judging notes

  • proprietary listening committee materials

 

Therefore, this report is:

  • not a legal accusation,

  • not a definitive statement of guilt,

  • not an allegation of intentional wrongdoing by individuals.

 

It is provided as a reasoned, evidence-based assessment rooted in publicly accessible materials.

2.4 Not an Attack on Individuals

This report explicitly avoids assigning motive or intent to any entrant. It is entirely possible for a songwriter to:

  • misunderstand AI-assisted tools

  • believe AI output is “just another plugin”

  • be unaware of contest restrictions

  • or overestimate the extent of their own authorship.

 

This investigation evaluates audio and metadata, not personal integrity.

2.5 Inclusion of Disclaimer

This section:

  • protects the author from claims of overreach or misrepresentation

  • acknowledges the limitations of second-hand metadata

  • clarifies the evidentiary hierarchy

  • underscores how ISC’s own rules determine ineligibility

  • emphasizes transparency and methodological honesty

  • sets the foundation for all subsequent analysis

2.6 Final Caveat

This report cannot account for a multitude of specific (prohibited) scenarios in which, for instance, an AI system generates a composition, the entrant transcribes it by ear onto digital or physical sheet music, performs it live and/or records it in a studio, and submits the newly recorded human track as their own creation.

However, the synthetically identical waveforms, AI timbral characteristics, and text-to-music encoder fingerprints found in the files publicly published by ISC on their own website prove that the two flagged entries are not live human re-recordings. They are original AI-generated audio outputs, not human performances. Thus, this caveat does not apply in this case.

3. Investigative Approach: Methodologies Utilized

This investigation employed a multi-layered forensic process reflecting professional audio-analysis standards, ethical constraints, and the realities of evaluating publicly released ISC winner files rather than original entrant submissions. The methodologies were deliberately sequenced so that expert auditory analysis formed the evidentiary foundation, with digital forensics, metadata review, waveform visual analyses, replicability tests, independent third-party verification methods, OSINT research, and archival rule verification serving as supporting layers of scrutiny.

The procedures below do not present or imply findings. They describe the investigative tools and standards used to evaluate whether any publicly available information concerning the 2024 ISC winning songs provides characteristics inconsistent with human authorship or compliant creative workflows under ISC’s rules.

3.1 Expert Auditory Analysis (Primary Method)

Expert auditory evaluation serves as the cornerstone of this investigation and remains the most reliable method for assessing the authenticity of musical audio in the absence of stems, multitrack recordings, or direct access to original submission files. This method is widely used in professional production, audio engineering, voice science, and forensic musicology.

The analysis draws on more than 30 years of training and continuous professional experience as a vocalist, producer of live concerts and staged productions, producer of studio and live albums, music director, music educator, creative director for nationally televised music specials, composer, arranger, orchestrator, and vocal pedagogue. This breadth of expertise provides the trained auditory acuity required to detect non-organic timing behaviors, atypical harmonic responses, non-physiological vocal patterns, and other indicators that fall outside human vocal or instrumental performance norms.

 

Each ISC-winning track was subjected to repeated critical listening sessions in multiple environments, including calibrated studio monitors, reference headphones, and consumer playback systems. Evaluation criteria included micro-timing variability, natural breath behavior, phrasing integrity, harmonic realism, dynamic shaping, timbral consistency, and mix characteristics.

 

Expert auditory analysis establishes the initial classification for each track. All other methodological tools serve to corroborate, contextualize, or challenge these primary observations.

3.2 Targeted Forensic Review (Secondary Method)

This investigation did not attempt to evaluate all 67 ISC-winning recordings at the same level of forensic depth. Instead, it used a targeted methodology: only tracks that exhibited clear preliminary anomalies during expert auditory screening were selected for additional digital, metadata, visual waveform, and OSINT evaluation.

This is not a limitation—it is a deliberate, professionally grounded approach consistent with investigative standards used in audio forensics, plagiarism analysis, scientific fraud review, and competitive rule-compliance audits. In these fields, comprehensive catalog-wide inspection is neither required nor recommended. Investigators focus their efforts on material that meets defined thresholds of concern.

 

A targeted approach provides several methodological advantages:

  • It ensures proportionality: only recordings that warrant forensic scrutiny receive deeper analysis.

  • It minimizes investigative bias: a full-catalog audit can appear as if the investigator intended to “find as many violations as possible,” regardless of evidence. Targeted review maintains neutrality.

  • It reflects realistic ethical standards: without stems, DAW files, or internal submission materials, expert auditory analysis is the most appropriate and reliable first-stage filter for identifying songs that may require closer evaluation.

  • It aligns with ISC’s own rules: because any amount of AI-generated material (even 1%) constitutes immediate disqualification under Rule 13, identifying even a single violation is sufficient to demonstrate a systemic vulnerability in the competition’s review pipeline. A catalog-wide audit is therefore unnecessary for this report to establish policy relevance.

This targeted forensic scope follows the accepted investigative model of:

  1. expert auditory analysis;

  2. focused secondary analysis on flagged tracks;

  3. cross-method validation;

  4. contextualization; and

  5. rule-eligibility assessment.

This preserves rigor while maintaining ethical and methodological integrity.

3.3 Metadata Forensic Review

Metadata forensics were conducted on each of the publicly accessible ISC winner audio files using a consolidated spreadsheet that catalogs encoder data, version identifiers, sample rates, bitrate patterns, ID3 structures, timestamp formats, artwork fields, and the presence or absence of standard descriptive metadata.

While metadata alone cannot confirm AI involvement, it is a critical tool for identifying shared processing pipelines, scrubbed or overwritten authorship fields, unusual encoder uniformity across unrelated entrants, and inconsistencies with standard DAW-rendered music files. Metadata review was used strictly as supporting evidence and is interpreted within the known limitation that ISC-hosted versions are not the original entrant submissions and may have been altered by the competition’s upload and distribution processes.
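
As a simplified illustration of how such a catalog can be assembled, the following Python sketch uses the mutagen library to dump encoder, tag, bitrate, and sample-rate fields into a CSV. The folder name, field selection, and output path are hypothetical stand-ins for the actual spreadsheet workflow.

```python
# pip install mutagen
import csv
from pathlib import Path
from mutagen.mp3 import MP3

rows = []
for path in sorted(Path("isc_winners").glob("*.mp3")):  # hypothetical folder of public files
    audio = MP3(path)
    tags = audio.tags or {}
    rows.append({
        "file": path.name,
        "encoder": str(tags.get("TSSE", "")),   # encoder/settings frame, e.g. "Lavf60.3.100"
        "title": str(tags.get("TIT2", "")),
        "artist": str(tags.get("TPE1", "")),
        "bitrate_kbps": audio.info.bitrate // 1000,
        "sample_rate_hz": audio.info.sample_rate,
        "duration_s": round(audio.info.length, 1),
    })

with open("metadata_catalog.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```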

3.4 Spectrogram and Waveform Analysis

Spectrograms, phase plots, and waveform displays were reviewed to obtain a broad visual context for the recordings. These tools were used strictly to observe general patterns such as repeated structural sections, amplitude contours, and overall continuity between duplicated musical passages.

Because spectrogram interpretation requires specialized training, no conclusions in this report rely on detailed spectral readings or technical frequency-domain interpretation. Instead, these visualizations served as supplemental reference material, helpful for confirming observations already made through auditory review, metadata analysis, and structural comparison.

 

In particular, spectrograms and waveform views were used to:

  • verify the presence of structurally identical repeated sections,

  • observe continuity or abrupt transitions in the musical arrangement, and

  • contextualize auditory anomalies such as sudden texture changes or identical phrasing across supposedly separate vocal takes.

These tools supported, but did not drive, the analytical findings presented in Section 5.
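
As a rough illustration of how such visualizations can be produced outside a DAW, the following Python sketch renders a log-frequency spectrogram with librosa and matplotlib. The input filename is hypothetical, and no interpretive claims are attached to the output; it simply reproduces the kind of visual reference material described above.

```python
# pip install librosa matplotlib
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical filename for one of the publicly posted winner files.
y, sr = librosa.load("winner_track.mp3", sr=None, mono=True)

# Short-time Fourier transform, converted to decibels for display.
S_db = librosa.amplitude_to_db(np.abs(librosa.stft(y, n_fft=2048, hop_length=512)), ref=np.max)

fig, ax = plt.subplots(figsize=(12, 4))
img = librosa.display.specshow(S_db, sr=sr, hop_length=512, x_axis="time", y_axis="log", ax=ax)
fig.colorbar(img, ax=ax, format="%+2.0f dB")
ax.set_title("Log-frequency spectrogram (visual consistency check only)")
fig.savefig("spectrogram.png", dpi=150)
```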

3.5 Time-Stretch Artifact Amplification (DAW-Based Vocal Isolation Review)

To further evaluate vocal authenticity, the investigation incorporated a supplemental procedure involving vocal-line extraction and controlled time-stretch analysis.


Publicly available ISC-posted audio files were processed through standard vocal-isolation tools (not original stems) and imported into a digital audio workstation (GarageBand) for time-domain review.

Although isolated vocals derived from full mixes are inherently imperfect representations, these remain sufficiently accurate for identifying broad synthetic artifact behavior.

 

When the isolated vocal line is slowed significantly—beyond natural performance tempo—human voices typically exhibit irregularities in breath noise, micro-timing, transitional mechanics, and formant adjustments.


By contrast, algorithmically generated or heavily synthesized vocals often produce metallic tearing, formant smearing, harmonic collapse, and other non-organic behaviors that become more pronounced as the tempo is reduced.

This method was not used as a primary form of evidence; rather, it served as a corroborative tool to magnify and clarify auditory anomalies initially detected through expert listening. Its purpose is supportive: to determine whether slowed time-stretch behavior is consistent with human vocal biomechanics or reflects characteristics commonly associated with algorithmic generation by artificial intelligence.
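
The sketch below shows, in outline, how such a slow-down can be reproduced with librosa's phase-vocoder time stretch. The input file stands in for a vocal line already isolated with a separate tool, and the 40% rate is an illustrative choice rather than the exact setting used in the investigation.

```python
# pip install librosa soundfile
import librosa
import soundfile as sf

# "isolated_vocal.wav" stands in for a vocal line already extracted
# from the full mix with a separate vocal-isolation tool.
y, sr = librosa.load("isolated_vocal.wav", sr=None, mono=True)

# Slow playback to 40% of the original tempo without changing pitch.
# Human vocals reveal breath noise and articulatory detail here;
# synthetic vocals tend toward metallic tearing and formant smearing.
slowed = librosa.effects.time_stretch(y, rate=0.4)

sf.write("isolated_vocal_slowed.wav", slowed, sr)
```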

3.6 Text-to-Music Replicability Testing, Feasibility Assessment (Suno v3.5, Submission-Window Controls)

To evaluate the technological capabilities available to entrants during the 2024 ISC submission window, the investigation included a controlled text-to-music generation test using Suno v3.5, the latest version of one of the most popular AI music generators available at the time.

 

The purpose of this procedure was not to determine authorship, but to assess:

  • the generative capabilities of the model,

  • the level of musical completeness it can produce from text prompts,

  • and the ease with which a user could create fully AI-generated songs during the relevant timeframe.

 

The test used publicly viewable lyrics from each flagged ISC-winning entry, along with a neutral, descriptive genre prompt. No melody input, reference audio, or production guidance was provided.

This method served as a supplemental feasibility assessment, intended to observe the outputs that Suno v3.5 was capable of producing from text-only input during the exact period in which entrants created their submissions.
 

The procedure is included in this report strictly as a contextual tool and not as a determinant of authorship or rule violation.

3.7 IRCAM Amplify AI-Detection Analysis

To introduce an additional layer of external verification, songs were analyzed using IRCAM Amplify’s AI Music Detector, a state-of-the-art classifier developed by France’s Institut de Recherche et Coordination Acoustique/Musique (IRCAM), one of the world’s leading research institutions in acoustics, audio analysis, and machine-learning detection.

IRCAM’s detector evaluates incoming audio for statistical similarities to known generative-model fingerprints. The system identifies:

  • timbral clustering consistent with neural-vocal synthesis,

  • pitch-instability patterns and micro-timing deviations typical of model-generated stems,

  • spectral and transient signatures associated with Suno, Udio, Stable Audio, and other contemporary text-to-music models.

 

IRCAM Amplify’s detector returns a probability score representing the likelihood that a track was produced using a known AI model. These results were not used in isolation, but rather as an independent benchmark to compare to the auditory, metadata, and replicability findings presented elsewhere in this report.

3.8 Open-Source Intelligence (OSINT) Procedures

Publicly available digital footprints were reviewed to contextualize the creative history, stylistic output, and musical backgrounds of credited songwriters or collaborators associated with publicly released winning tracks. OSINT sources included music platform profiles, release catalogs, social-media histories, artist statements, and public professional records.

OSINT analysis does not assess intent and does not allege wrongdoing. It provides environmental context to help determine whether a submission's stylistic, technical, or production characteristics align with the creator’s known creative history or whether the work warrants elevated scrutiny.

3.9 Archival Rule Verification

Because the purpose of this report is to evaluate compliance with the rules that governed the 2024 competition, all relevant ISC rules were verified using independently archived versions of the ISC Rules and FAQ pages preserved via the Internet Archive’s Wayback Machine. These archives confirm that all entrants were bound by the April 20, 2024 ruleset throughout the entire submission window and that the 2025 rule updates do not apply retroactively.

Rule verification ensures that all methodological choices in this investigation directly correspond to standards the entrants were required to meet—including prohibition of any AI-generated content.
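
For readers who wish to reproduce this verification, the Wayback Machine exposes a public availability API that returns the capture closest to a requested date. The Python sketch below queries it for the ISC rules page; note that it returns the nearest snapshot, which should then be compared against the specific captures cited in Section 4.

```python
# pip install requests
import requests

def closest_snapshot(url: str, timestamp: str) -> str | None:
    """Return the Wayback Machine capture closest to a YYYYMMDD timestamp, if any."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=30,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest", {})
    return closest.get("url") if closest.get("available") else None

# Verify the rules page as it stood before the 2024 submission window opened.
print(closest_snapshot("https://www.songwritingcompetition.com/rules", "20240420"))
```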

3.10 Methodological Integration Model

The investigation relied on a layered evidentiary model in which:

  • Expert auditory analysis establishes the primary assessment

  • Metadata findings provide workflow and encoder context

  • Spectrograms and audio waveforms visually confirm or contradict auditory observations

  • OSINT research contextualizes publicly available authorship footprints

  • Archival rule validation establishes relevance to ISC’s governing criteria

This integration model minimizes bias, avoids over-reliance on any single tool, and ensures that all conclusions are grounded in established professional standards of audio analysis.

3.11 Limitations of this Methodology

All analyses were performed on publicly published ISC-hosted files, not on original entrant submissions, stems, DAW sessions, or internal judging materials. The methodology therefore incorporates explicit limitations: metadata may have been altered by ISC’s file management systems, digital footprints may be incomplete or altered, and certain conclusions cannot be drawn without access to private submission materials. The report acknowledges these constraints to maintain transparency and accuracy.

4. Official Rules Governing the 2024 International Songwriting Competition

4.1 Primary Source Evidence of ISC's AI Prohibition in Effect Before 2024 Submission Window

To verify the rules that governed the 2024 International Songwriting Competition (ISC), an independent archival review was conducted using the Internet Archive’s Wayback Machine. The earliest uncontested snapshot of the ISC rules page prior to the opening of the submission window (July 2, 2024) is dated April 20, 2024 and is preserved here:
https://web.archive.org/web/20240420223030/https://www.songwritingcompetition.com/rules 

(Accessed 22 November 2025)

This April 20, 2024, archival capture predates any submission period, including Early Bird, Regular, and Extended deadlines, making it the definitive source for the rules that governed all entrants in the 2024 competition.

FIGURE 1. ISC Rules Page: April 20, 2024 (Before Submissions Opened)


Figure 1: Archived version of the 2024 ISC Rules page, captured on April 20, 2024 (prior to the Early Submissions period, which began on July 2, 2024). This archive provides the governing rules to which all 2024 entrants agreed upon submission. Site accessed 22 November 2025.

This November 6, 2024, archival capture confirms that the ISC's governing rules remained unchanged across the Early Bird, Regular, and Extended Deadline submission periods.

FIGURE 2. ISC Rules Page: November 6, 2024 (Deadline for All 2024 Entries)


Figure 2: Archived version of the 2024 ISC Rules page, captured on November 6, 2024 (the Extended Deadline, the final deadline for all 2024 entries). This archive confirms that the ISC's governing rules were unaltered during the submission window for the 2024 competition cycle, to which entrants agreed upon submission. Site accessed 22 November 2025.

Table 1 presents the three competition rules relevant to this investigation, each with a short description of its relevance and reason for inclusion.

TABLE 1: SUMMARY OF PERTINENT ISC RULES (2024 COMPETITION CYCLE)

Rule #1: Originality & Rights
Official Wording (2024): “All entries submitted must be original songs and shall not infringe any copyrights or any other rights of any third parties.”
Relevance to This Investigation: AI-generated music cannot legally or creatively qualify as “original” under U.S. copyright law or ISC criteria. If melody, composition, or production is AI-generated, the entrant is not the author.

Rule #6: Judging Criteria
Official Wording (2024): “Songs are judged equally on melody, composition, originality, and lyrics… Entrant agrees to accept the final decision of ISC and its judges.”
Relevance to This Investigation: If the melody, composition, or originality is produced by AI, then the entrant did not create the judged elements, meaning the entry is fundamentally invalid.

Rule #13: AI Involvement
Official Wording (2024): “ISC prohibits any song or lyrics written partially or in full by ChatGPT or any other AI-generated content. If ISC identifies any abuse or violation of this policy, the song will be immediately disqualified and removed from the competition.”
Relevance to This Investigation: This is the decisive rule. Any AI involvement, even 1%, results in automatic disqualification. No thresholds. No ambiguity.

4.2 Verification of Submission Deadline

The Wayback Machine also preserves the corresponding 2024 FAQ page, confirming the Early Bird, Regular, and Extended Deadline dates:

Early Bird: July 2, 2024

Regular: September 18, 2024

Extended Deadline: November 6, 2024

FIGURE 3. Archived 2024 FAQ Page


Figure 3: Archived ISC FAQ page captured November 1, 2024, confirming the submission period of July 2, 2024, through November 6, 2024. This window of time establishes the technological capabilities available to entrants at the time of submission. Site accessed 22 November 2025.

4.3 Entrant Consent & Binding Agreement

All entrants were required to review and agree to the rules as a condition of entry. ISC’s own submission portal states that payment constitutes formal agreement to be bound by the rules and regulations.


This is stated directly within the Rules page:

Rule 17: “By entering ISC, Entrant agrees to be bound by all terms of these Official ISC Rules and Regulations.”

This is essential because it establishes:

  • Entrants agreed that any AI-generated content, even partially, is grounds for immediate disqualification (per Rule 13, captured in Figure 1 and Figure 2).

  • Entrants agreed that songs would be judged on melody, composition, originality, and lyrics (per Rule 6, captured in the November 6, 2024 archive above).

  • Entrants agreed that performance quality is not judged, but ownership and authorship are central (which underscores that AI-performed vocals cannot qualify).

This establishes the foundational legal and ethical framework under which all 2024 entries must be evaluated and held accountable.

4.4 Clarifying the Applicability of These Rules

These are the rules all entrants were bound by, without exception.

  • Even if an entrant submitted early in the cycle, they were bound to the rules active throughout the competition unless ISC published a rule change.

  • There is no evidence of any revised or updated rule page during the 2024 submission window.

  • The Wayback Machine shows no alternate rule versions during the entire 2024 submission period (early submission [July 2, 2024] through the extended deadline [November 6, 2024]).

The 2025 updated rules do not apply to the 2024 competition.

  • Following the 2024 competition cycle, ISC significantly revised its AI rules for the 2025 competition cycle, including language allowing ISC to modify its rules concerning AI at any time.

  • These revisions cannot be applied retroactively to the 2024 competition. (However, they are addressed in this report in Section 6: Ethical Implications and Procedural Recommendations, as well as Section 7: Author's Statement.) The foundation of this investigation remains firmly grounded in ISC's unambiguous 2024 rule disqualifying any submission generated "partially or in full" by AI content of any kind, with no thresholds, "majority authorship" clauses, percentages, or exceptions.

4.5 Section 4 Notes

  1. Screenshots are included to ensure preservation and transparency, as competition websites routinely update or overwrite their rules annually.

  2. The Wayback Machine is an established digital archive used in scholarly writing and research, journalism, and legal e-discovery. It preserves immutable snapshots of websites at specific points in time, which prevents retroactive alteration of the evidence. These captures serve as authoritative documentation of the rules in effect during the submission period.

5. Findings

Following expert auditory evaluation of all sixty-seven publicly available 2024 ISC winning song file submissions covered in this report, two recordings, Comedy: Second Place and Gospel/Christian: Third Place, exhibited unmistakable markers of AI-generation and performance that warranted full-spectrum forensic review under the targeted methodologies described in Section 3.
 

While several additional winning songs displayed anomalies consistent with prohibited competition practices, they were neither as pronounced nor as diagnostic as those in the two escalated entries.

 

Under ISC’s own rules, the presence of even a single element of AI involvement among the published winners constitutes a systemic screening vulnerability. Accordingly, these two escalated tracks serve as case studies for assessing the reliability and sufficiency of ISC’s review procedures.

 

The findings below present the multi-method forensic analysis conducted on the two winning songs that met the escalation threshold. Each subsection corresponds directly to the methodologies employed: auditory analysis, metadata review, spectrogram and waveform analysis, time-domain artifact amplification, text-to-music replicability testing, third-party testing through IRCAM Amplify, and OSINT research.

5.1 Audio Forensic Findings (Expert Auditory Analysis)

Comedy – Second Place

Expert auditory evaluation immediately and consistently identified vocal and performance characteristics inconsistent with natural vocal fold production and human authorship. These included unnatural metallic disturbances in the vocal signal, anomalously uniform vowel formation across register transitions and extremes, and several instances in which expected breath transitions were absent during passages that physiologically require inhalation. The lead vocal exhibited static, unnaturally synthetic pitch stability and transient precision, with micro-timing uniformity beyond human capability.

 

In addition to these vocal anomalies, the track contains a significant structural and lyrical malfunction at approximately 1:49, where the sung line diverges abruptly from the written lyrics supplied by the entrant. At this moment, two unrelated syllables sound truncated and fused together, creating a non-lexical, unintelligible fragment that does not correspond to any coherent word or phrase. This artifact does not resemble intentional vocal styling, ad-libbing, improvisation, or comedic delivery; instead, it aligns with known failure modes in AI-generated vocals when multiple partial generations are joined or when a model’s phoneme alignment collapses under transitional stress.

Structurally, this error occurs at the end of the bridge — a location traditionally used by human songwriters to explore musical contrast, a harmonic lift, or new lyrical insight leading into the final presentation of the song's chorus. Instead, the track abruptly inserts a truncated fragment of the chorus’s final line, which is close to—but not identical to—the song’s title. Notably, the exact title of the song never appears verbatim in any lyrical position, while multiple near-variants recur throughout the work. This pattern of approximate repetition without stable lexical anchoring is a well-documented characteristic of text-to-music systems, which often generate internally inconsistent lyrical motifs unless constrained with explicit human editing.

Between 2:32 and 3:18, the track enters an extended instrumental break (functionally an outro, as the vocal never returns). This is a structurally unusual choice for a competition submission in the Comedy category, where the listener would typically expect the final and most impactful punchline or comedic payoff. The outro begins with a fiddle line consisting of rapid sixteenth-note repetitions that mirror the preceding musical texture. This approach is itself narratively incongruous: a relentlessly fast, ornamental instrumental line does not align naturally with a song whose lyrical premise centers on a man comparing himself to a hibernating bear due to his inability to stay awake.

At 2:48, however, the fiddle is abruptly replaced by an electric guitar performing a musically unrelated figure with a conflicting harmonic center, rhythmic contour, and stylistic identity. The guitar material does not develop, answer, or complement the preceding fiddle motif; instead, it appears to originate from an entirely separate generative idea before vanishing suddenly at 3:08, at which point the fiddle returns to complete the final ten seconds of the outro, and with it, the song. The effect is jarring, directionless, and compositionally incoherent, leaving listeners perplexed not only by the disjunction between the solos but also by the vocalist's unexplained disappearance nearly a full minute earlier.

This type of discontinuous, multi-source instrumental stitching is characteristic of text-to-music generative systems, which often assemble long instrumental sections from multiple, independently generated segments rather than from a unified compositional intent. Human songwriters and arrangers do not typically alternate soloists mid-outro with unrelated melodic material—particularly not in a competition context where musical cohesion is expected to be a core evaluative criterion. The placement, duration, and incoherence of this extended outro further reinforce the track’s deviation from standard human songwriting practices and highlight systemic concerns regarding the competition’s screening and judging pipeline.

Gospel – Third Place

Expert auditory evaluation of the Gospel/Christian third-place track revealed immediate and persistent indicators inconsistent with human vocal production. Across the full performance, the lead vocal displays a non-human, metallic timbre that lacks the natural breath shimmer, subglottal resonance, and upper harmonic vitality expected in a lyric baritone — the fach most aligned with the tessitura and range of the work. Instead, the purported “chest register” is dominated by a coarse, mechanical buzz with a frequency profile more akin to an electronic tone generator (alarm-clock–like in its rigidity and overtone shape) than to human vocal-fold phonation.

 

This artificial quality becomes even more pronounced when the voice transitions into what should be the head-register passages of the chorus. Rather than exhibiting the acoustic shifts associated with human registration — changes in formant tuning, laryngeal height, airflow patterns, or vowel reshaping — the timbre remains uniform and invariant, as though a single synthesized sound source were simply pitch-shifted upward without the physical passaggio adjustments required of a human singer. The resulting effect is a tonal discontinuity that listeners interpret as “unnatural,” not stylistic.

 

Throughout the song, repeated phrases exhibit identical harmonic alignment, vibrato rate, vibrato depth, and overtone distribution, with no evidence of the microvariations that occur even in highly trained vocalists. A highly trained human singer cannot reproduce vibrato at the same rate and depth across different pitches, intensities, and expressive contexts without deviation; however, this recording presents a vibrato that functions as a static, model-locked parameter rather than a physiological phenomenon.

Further anomalies include:

  • Fixed formant behavior across register shifts, with no observable vowel modification, timbral shading, or tract resonances changing as the melody rises or falls.

  • Absence of laryngeal or articulatory transitions, producing an unnaturally smooth phonemic delivery.

  • Identical transient behavior across varied consonants, suggesting synthetic phoneme rendering rather than articulatory mechanics.

  • Inability to detect inhalation noise or breath events during musically required phrasing breaks, despite phrases exceeding typical breath-cycle capacities.

Taken together, these features present a vocal line that behaves not as a biological instrument shaped by breath, muscle, emotion, and resonance, but as a synthetic construct governed by algorithmic consistency. The combination of fixed formants, invariant vibrato, registration uniformity, and metallic timbral artifacts met the established criteria for escalation to full-spectrum forensic review.

5.2 Metadata Findings

Metadata extracted from the two escalated recordings was examined against the full dataset of sixty-seven ISC 2024 winning audio files. Although metadata alone cannot confirm authorship, the metadata profiles of the Comedy (Second Place) and Christian/Gospel (Third Place) entries exhibit distinctive patterns that deviate meaningfully from both typical human production workflows and the broader winner pool. These anomalies are consistent with automated rendering, batch-transcoding pipelines, or text-to-music generation systems, and warrant close attention when evaluated alongside the auditory findings and basic visual spectrogram inspection (non-technical).

1. Unusual Encoder Signatures (Major Anomaly)

Both escalated tracks were encoded using Lavf, an FFmpeg-based encoder family. This is atypical for the work of human songwriters, especially work that is submitted to an international songwriting competition, which usually retains a DAW fingerprint from Pro Tools, Logic, Ableton, FL Studio, Reaper, Studio One, or GarageBand. Instead, both flagged songs display encoder strings associated with post-render transcoding rather than direct DAW export.

  • Comedy – Second Place: Encoded with Lavf58.45.100, a version not used by any other ISC winner. This specific build corresponds to older FFmpeg branches and has been publicly documented in discussions of AI-generated music pipelines (including Suno’s earlier v3 workflows).

  • Gospel – Third Place: Encoded with Lavf60.3.100, which appears across nine winning entries in eight different categories. Although it is theoretically possible for this encoder family to appear in legitimate workflows, its uniformity across unrelated artists, genres, and production circumstances in an international competition is particularly noteworthy. A pool of independent entrants this broad would converge on an identical FFmpeg build only under extremely rare circumstances: either (a) the files were batch-processed post-submission (unlikely, given the wide variety of categories), or (b) the submissions originated from text-to-music systems that share the same backend rendering tool. Lavf60.3.100 is another encoder build publicly documented in association with Suno's AI-generated song output.

 

Given the diversity of countries, studios, and software normally represented in a competition of this scale, the recurrence of a single encoder version across unrelated entrants is statistically anomalous. It is consistent with centralized processing pipelines—either at the entrant level or at the generation level—rather than independent DAW exports.

Public technical discussions (including widely circulated Reddit analyses of AI-generated music output) have associated Lavf60.x.x and Lavf58.x.x signatures with Suno and similar text-to-music platforms, particularly when files are downloaded directly from the generator without additional mastering or DAW intervention. This contextual information does not assert authorship but situates the observed metadata patterns within known generative-music workflows.
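
A tally of encoder strings across the downloaded winner files makes this kind of convergence straightforward to quantify. The following sketch (mutagen again; the folder name is hypothetical) counts occurrences of each TSSE encoder value across the dataset.

```python
# pip install mutagen
from collections import Counter
from pathlib import Path
from mutagen.mp3 import MP3

encoders = Counter()
for path in Path("isc_winners").glob("*.mp3"):  # hypothetical folder of public winner files
    tags = MP3(path).tags
    tsse = str(tags.get("TSSE", "")) if tags else ""
    encoders[tsse or "<no encoder tag>"] += 1

# A cluster of identical Lavf builds across unrelated entrants is immediately visible.
for encoder, count in encoders.most_common():
    print(f"{count:3d}  {encoder}")
```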

2. Absence of DAW Fingerprints (Strong Indicator)

Most professional DAWs embed recognizable metadata even when users leave fields blank. Such metadata routinely includes:

  • DAW version identifiers

  • LAME tags (Lame 3.100, etc.)

  • Encoding presets

  • Audio engine markers

  • Render origins

 

Neither escalated track contains any DAW-linked identifiers whatsoever. Instead, both present as bare, minimally populated files with:

  • no composer or publisher metadata

  • no genre tags

  • no ISRC

  • no artwork fields

  • no comments or internal project data

  • no embedded timing or project-stem markers

This level of stripping is consistent with FFmpeg-based rendering pipelines or automated platform exports. It is not typical of human-produced competition submissions, particularly when produced in professional or semi-professional environments.

3. Sub-Optimal or Nonstandard Bitrates

Competition entries are commonly submitted at the highest feasible MP3 bitrate (320 kbps) or as WAV/AIFF files to preserve fidelity. The majority of ISC 2024 winners follow this pattern.

The escalated entries do not.

  • Comedy – Second Place: 192 kbps, among the lowest bitrates in the entire winner pool.

  • Gospel – Third Place: 235 kbps, a nonstandard bitrate appearing nowhere else in the dataset.

 

Lower bitrates are a known characteristic of direct text-to-music platform exports, which frequently render files at middle-range bitrates optimized for streaming previews rather than competition-grade final song submissions.

4. Sample Rate Inconsistency (Supporting Finding)

Although 44.1 kHz and 48 kHz are both standard sample rates, the flagged entries differ from each other despite sharing identical stripping patterns and similar FFmpeg rendering markers. This inconsistency is not itself evidence of AI involvement but fits the broader pattern of non-DAW, multi-model rendering, where the sample rate is determined by the export function of the engine rather than producer choice.

Combined with the other anomalies, especially the uniformity of missing metadata and FFmpeg encoding, the sample-rate discrepancy reinforces the interpretation that these files did not originate from a cohesive human-controlled production workflow.

5. Structural Metadata Uniformity (Supporting Finding)

Both escalated entries exhibit metadata rows that follow almost identical structural ordering, omit the same ID3 fields, and present the same “cleaned” exterior—a pattern shared by the nine FFmpeg-encoded tracks but not by the DAW-encoded winners. This suggests:

  • Batch processing, or

  • Automated export pipelines, or

  • Platform-generated files using shared render architecture

Independent human songwriters do not typically produce indistinguishable metadata footprints unless they are using the same DAW, same tools, and same encoding chain. The diversity of the remaining winner pool argues strongly against this.

5.3 Consistency Check Using Spectrogram & Waveform Views (Non-Diagnostic Support)

Spectrogram and waveform visualizations were additionally reviewed to check for internal consistency within each escalated recording. These views were not used as a standalone diagnostic tool, and no claims of expert spectrogram interpretation are made. Instead, the purpose of this step was limited to verifying whether repeated musical sections displayed identical structural patterns, something verifiable without specialized audio-forensics training.

 

In both the Comedy (Second Place) and Gospel/Christian (Third Place) recordings, isolated choruses were imported into a DAW and aligned sample-to-sample. In every case, repeated choruses rendered visually identical waveforms, with no variation in transients, vowel shapes, phrasing, plosive timing or energy, or micro-timing. Human vocalists, even under controlled studio conditions, do not reproduce entire multi-bar vocal performances with 100% identical waveforms; generative systems, however, frequently do.

 

This limited, non-interpretive use of spectrogram and waveform views was therefore consistent with the artifacts identified through the other forensic methods (metadata review, time-stretch review, text-to-music replicability, and OSINT). These observations support but do not replace the conclusions drawn from the primary methodology.
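
A minimal sketch of this alignment check is shown below: two chorus excerpts (hypothetical filenames, pre-cut from the isolated vocal) are aligned by cross-correlation and their residual difference measured. Genuinely separate human takes leave residuals far above the noise floor of the isolation tool; duplicated stems do not.

```python
# pip install librosa scipy numpy
import librosa
import numpy as np
from scipy.signal import correlate

# Hypothetical excerpts: the same chorus as it occurs at two points in the song.
a, sr = librosa.load("chorus_occurrence_1.wav", sr=None, mono=True)
b, _ = librosa.load("chorus_occurrence_2.wav", sr=None, mono=True)

n = min(len(a), len(b))
a, b = a[:n], b[:n]

# Find the best sample offset via FFT-based cross-correlation.
lag = int(np.argmax(correlate(a, b, mode="full", method="fft"))) - (n - 1)
if lag > 0:
    a, b = a[lag:], b[: n - lag]
elif lag < 0:
    a, b = a[: n + lag], b[-lag:]

# Residual peak difference, normalized to the louder excerpt's peak level.
peak = max(float(np.max(np.abs(a))), float(np.max(np.abs(b)))) or 1.0
residual = float(np.max(np.abs(a - b))) / peak
print(f"best lag: {lag} samples; residual peak difference: {residual:.2%}")
```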

Comedy – Second Place

A non-technical visual scan of the Comedy (Second Place) chorus sections revealed nearly identical waveforms across the song's two chorus repetitions. When the isolated vocal stems were aligned and played simultaneously, the contours matched with near-sample-level precision; any minor, inaudible differences can be attributed to inconsistencies in the vocal-isolation software, a necessary intermediate step in the absence of source material. This degree of waveform identity across two sections of the song is far tighter than typical human multitracking, where micro-timing variance is unavoidable even with professional editing.

A screenshot (Figure 4) is included for transparency, showing the overlaid chorus waveforms in GarageBand.

This visual alignment was not used diagnostically on its own, but it was consistent with the auditory finding that the choruses had identical performances rather than natural re-takes.

FIGURE 4. Identical Repeated Choruses in Comedy (Second Place Entry)


Figure 4. Identical Repeated Choruses in Comedy (Second Place Entry)

Waveform alignment of two independent chorus occurrences from the Comedy (Second Place) recording. When isolated and placed on separate tracks in a DAW, both choruses render sample-accurate identical waveforms, including identical transients and micro-timing. Such perfect duplication is physiologically impossible for live human vocal performance, but it is a known characteristic of text-to-music generative systems that reuse a single synthesized vocal stem across multiple song sections.

Gospel – Third Place

Four separate chorus iterations were extracted from the full mix and aligned. All four displayed the same timing, amplitude envelopes, and inflection contours, despite appearing in different structural positions within the song.

Again, a screenshot (Figure 5) is provided solely for transparency.
 

These repeated patterns do not constitute formal spectral analysis but visually support the auditory finding that the vocal line was produced once and mechanically repeated — a hallmark of text-to-music output.

FIGURE 5. Identical Repeated Choruses in Gospel / Christian (Third Place Entry)


Figure 5. Identical Repeated Choruses in Gospel/Christian (Third Place Entry)

Four isolated chorus repetitions display identical waveform structures across all instances. As with the Comedy entry, this behavior is inconsistent with human vocal production and aligns with duplicated synthetic vocal stems generated by AI text-to-music tools.

5.4 Time-Stretch Artifact Amplification Findings (DAW Review)

To further clarify the nature of the vocal lines, each track's isolated vocal (derived from full-mix vocal isolation tools) was imported into a digital audio workstation for controlled time-stretch review. When slowed significantly, beyond natural performance tempo, human vocals, even those that have undergone extreme amounts of pitch correction, reveal unpolished, very "human" irregularities in breath noise, articulatory transitions, formant shifts, and micro-timing inconsistencies.

Both flagged songs produced artifacts during slowdown that were inconsistent with human vocal biomechanics, including metallic tearing, abrupt harmonic collapse, smeared formants, and machine-like distortions that intensified at lower playback tempos. These characteristics are expected of AI-generated, algorithmic vocal lines undergoing time-domain manipulation.

Although isolated vocals derived from full mixes are inherently imperfect, the artifacts observed were sufficiently pronounced to provide corroborative evidence supporting the auditory and spectral findings.

5.5 Text-to-Music Replicability Test Findings (Suno v3.5)

A controlled replicability test was conducted using Suno v3.5, the best model available to entrants during the 2024 submission window. Using only the publicly viewable lyrics and a neutral genre descriptor—without providing melody, reference audio, or production instructions—the model generated complete musical tracks that exhibited:

  • similar melodic contour and phrasing patterns,

  • nearly identical chord progressions relative to the respective key signatures,

  • similarly structured tonality, with an identical absence of modulation anywhere,

  • vocal timbres closely resembling those in the corresponding ISC-flagged entries,

  • comparable genre-appropriate arrangement and instrumentation,

  • and production aesthetics aligned with text-to-music workflows from the relevant period.


While this experiment does not determine authorship, it demonstrates that the sonic characteristics present in the flagged ISC entries are reproducible by an AI model available during the submission period with minimal prompting, zero musical skill, and no engineering expertise. This supports the systemic concern that ISC’s screening pipeline is not equipped to detect AI-generated or AI-assisted submissions under current procedures.

5.6 IRCAM Amplify AI-Detection Findings

Both escalated ISC entries were submitted to IRCAM Amplify’s AI Music Detector for independent analysis. IRCAM flagged a high AI-generation likelihood for each track and identified the specific generative models involved: Suno v3.5 for the Comedy entry and Suno v3 for the Gospel/Christian entry.

Comedy, 2nd Place

  • IRCAM returned an AI-generation probability of 98%

  • Detected Model (if applicable): Suno v3.5

  • See Figure 6:

FIGURE 6. IRCAM Amplify AI Music Detector Results for Comedy (2nd Place) 2024 ISC-Winning Song

IRCAM Amplify AI Music Detector returned an AI-generation probability of 98% for the Comedy, 2nd Place winner in the 2024 International Songwriting Competition. IRCAM Amplify AI Music Detector also detected Suno v3.5 as the AI model that generated the song.


Gospel/Christian, 3rd Place

  • IRCAM returned an AI-generation probability of 95%

  • Detected Model (if applicable): Suno v3

  • See Figure 7:

FIGURE 7. IRCAM Amplify AI Music Detector Results for Gospel/Christian (3rd Place) 2024 ISC-Winning Song

IRCAM Amplify AI Music Detector returned an AI-generation probability of 95% for the Gospel/Christian, 3rd Place in the 2024 International Songwriting Competition. IRCAM Amplify AI Music Detector also detected Suno v3 as the AI model that generated the song.


5.7A OSINT Findings: Comedy, 2nd Place

Entrant: "Gregory S." (publicly listed by ISC)

Category: Comedy / 2nd Place

 

OSINT Summary (Public Record Only, No Personal Data Beyond What the Entrant Has Publicly Published)

 

Open-source review of publicly available information about the credited entrant indicates no discoverable background in music production, audio engineering, composition, or vocal performance. The individual’s professional footprint reflects a long career in law enforcement and investigative roles, with current employment in a government security agency. No online discography, artist profile, studio credits, performance history, production portfolio, or prior songwriting catalog could be located.

This absence of a musical or technical production background does not constitute a violation of ISC's rules; however, it is noteworthy when evaluated alongside the substantive forensic findings indicating production characteristics incompatible with typical human-generated music workflows.

 

In contemporary music competitions, even non-professional entrants typically maintain some form of musical footprint—such as social media performance clips, distributor accounts (DistroKid, TuneCore), SoundCloud profiles, YouTube uploads, Bandcamp pages, or collaborative credits. No such footprint was identifiable for this entrant.

Taken together, the OSINT profile is consistent with an individual outside the music production and songwriting ecosystem and does not provide any independent support for human-directed audio engineering, vocal tracking, or multi-instrumental recording capabilities. When viewed in conjunction with the audio, metadata, and replicability anomalies presented earlier in Section 5, this contextual information highlights additional discrepancies between the credited creator profile and the production characteristics of the submitted recording.

5.7B OSINT Findings: Gospel, 3rd Place

Entrant: “JesseJ” (publicly listed by ISC)
Category: Christian/Gospel / 3rd Place

 

OSINT Summary (Public-Record Only — AI-Relevant Behaviors Only)

Public information for the Christian/Gospel entrant leads to a website branded as “songsbyjesse.com,” where the entrant describes himself as a songwriter and “storyteller” working across multiple genres, including sync, collaborations, commercial work, and Christian music.

 

Notably, OSINT review shows:

1. Extremely high-volume musical output across the entrant’s public SoundCloud account:

  • 110+ tracks posted within ~12 months

  • each track includes fully produced arrangements and unique artwork

  • many genres are represented (Christian, CCM, pop, sync styles, cinematic, etc.)

 

Such high-frequency, multi-genre publishing is rare for independent human composers working alone, particularly without a visible collaborative pipeline, label support, or production team.

This does not prove AI usage, but it strongly aligns with generative-music workflows where tracks can be created and published rapidly at scale.

2. Stylistic dispersion consistent with AI music catalogs

Listening samples and waveform thumbnails reveal:

  • highly polished mixes across many unrelated genres

  • consistent loudness and mastering profiles (see the loudness sketch below)

  • artwork that appears to be either AI-generated or templated at scale

  • similar structural patterns across compositions regardless of genre

 

These features match output from:

  • Suno

  • Udio

  • Stable Audio

  • other high-throughput text-to-music generative music systems

 

and differ markedly from typical independent songwriter catalogs, which show slower release schedules, genre clustering, and human-performance variability.
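The loudness-consistency observation in point 2 can be spot-checked with standard tooling. The sketch below, assuming the entrant's public tracks have been saved locally to a catalog/ folder (a hypothetical path), computes ITU-R BS.1770 integrated loudness with pyloudnorm and reports the spread across the catalog; independent human-mastered catalogs typically vary by several LUFS across genres, while templated automated output clusters tightly.

# Minimal sketch of the catalog loudness check. Paths are hypothetical.
import glob
import soundfile as sf
import pyloudnorm as pyln

readings = []
for path in sorted(glob.glob("catalog/*.wav")):
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)  # ITU-R BS.1770 loudness meter
    readings.append((path, meter.integrated_loudness(data)))

lufs_values = [lufs for _, lufs in readings]
print(f"{len(readings)} tracks analyzed")
print(f"LUFS range: {min(lufs_values):.1f} to {max(lufs_values):.1f} "
      f"(spread {max(lufs_values) - min(lufs_values):.2f} LU)")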

3. Awards pattern inconsistent with a traditional songwriting résumé

The entrant’s website lists:

  • Unsigned Only Music Competition Winner award

  • International Songwriting Competition recognition

  • multiple finalist badges

 

Combined with the volume of output, the pattern resembles AI-assisted “competition-submission loops” seen in recent online communities where creators generate large batches of songs through text-to-music systems and submit them to multiple contests.

Again — not proof, but contextually relevant to ISC’s mandate to detect and prevent AI-generated submissions.

4. No verifiable professional music credits

OSINT searches of:

  • PRO databases (ASCAP, BMI, SESAC)

  • Songwriter/producer credits (Discogs, AllMusic)

  • streaming platforms

  • commercial sync libraries

 

return no confirmed songwriting credits under the entrant’s name. This absence stands in contrast to the unusually large and polished body of public output.

Interpretation (Non-accusatory, rule-aligned)

Taken together, the OSINT indicators do not assert that the entrant acted with intent or violated competition rules. However, they do show:

  • a public history of mass-generated music output,

  • stylistic patterns consistent with contemporary text-to-music generators,

  • an award-submission profile typical of AI-assisted creators,

  • and no traditional music-industry footprint.

 

These factors support the larger multi-method finding: that the sonic, metadata, and production anomalies observed in the flagged ISC entries align with workflows and outputs characteristic of AI-generated or AI-assisted music systems, not with human-authored songwriting.

FIGURE 8. Awards Recognition Banner on Entrant's Website ("songsbyjesse.com")


The high volume of contest submissions and recognitions displayed here is consistent with the OSINT finding that the entrant actively participates in multiple songwriting competitions. Screenshot accessed 24 November 2025.

5.8 Rule Relevance Analysis

Per Rule 13, any amount of AI-generated involvement—melodic, lyrical, vocal, or instrumental—renders an entry immediately ineligible and requires disqualification without refund. The combined auditory, spectral, metadata, replicability, and contextual OSINT findings presented in this section do not assert intent or accuse any entrant of wrongdoing. Instead, they demonstrate consistent, multi-method correlations between the posted audio files and well-documented workflows, artifacts, and sonic characteristics associated with contemporary text-to-music generation systems.
Taken together, these independent layers of evidence corroborate the conclusions of the primary investigative method—expert auditory analysis—which identified these two recordings as exhibiting characteristics incompatible with human vocal or production norms.

 

Under ISC’s own rules, the presence of even a single non-compliant entry within the publicly announced winners is sufficient to demonstrate a structural vulnerability in the competition’s submission, screening, and judging pipeline. These two escalated case studies therefore highlight specific pressure points that warrant internal review and transparent procedural improvement.

Beyond violating Rule 13, the presence of AI-generated performance elements also creates unavoidable conflicts with two additional ISC requirements. Under Rule #6, all entries must be judged on melody, composition, originality, and lyrics — criteria that presuppose human creative authorship. When core musical elements such as melody, harmonic structure, vocal performance, arrangement, and overall design are generated by a text-to-music system, those evaluative categories no longer meaningfully apply to the entrant. Likewise, Rule #1 requires that entries be original and not infringe on the rights of others; however, AI-generated melodic and instrumental content cannot be claimed as original human authorship and does not constitute a human-created work under the rule’s intended scope. As a result, any piece with AI-generated musical or vocal performance is not merely disqualified under Rule 13 — it is structurally incompatible with the criteria and authorship assumptions embedded in Rules #1 and #6.

6. Ethical Implications & Procedural Recommendations

6.1 Integrated Conclusion with High-Confidence Anomaly Indicators

The findings of this report do more than expose markers highly suggestive of explicitly prohibited competition behavior; they reveal a fundamental, systemic procedural breakdown inside the International Songwriting Competition (ISC), a rupture that allowed AI-generated songs to be awarded top honors in a year when the rules explicitly and unambiguously prohibited any AI involvement of any kind.

Beyond the entrants who submitted fraudulent material, this breakdown has immediate and far-reaching ethical consequences for:

  • the legitimate 2024 entrants

  • the integrity of the ISC's judging process

  • the public reputation of ISC

  • the reputations and authoritative statuses of the celebrity judges

  • the broader songwriting community

 

And critically: the 2024 award cycle is still active.
 

These winners are still displayed, still promoted, and still benefiting from honors earned under rules they did not follow.


That makes this not a historical error, but a present-day harm.

6.2 Harm to Legitimate Entrants

By allowing AI-generated material to advance through the winner pipeline, ISC unintentionally:

  • grossly mishandled opportunities that should have gone to human songwriters who abided by the rules

  • denied life-changing placements, visibility, and career momentum

  • invalidated the meaning of a “human-created songwriting competition”

  • created mistrust in the legitimacy of both the ISC's screening and judging processes

Entrants paid real money and created real art under real rules. Those rules were not upheld. This demands retroactive correction, not future promises.

6.2A Clarification Regarding ISC’s “Final Decision” Clause

The 2024 ISC rules contain a provision stating that entrants must accept the organization’s final decisions and may not dispute the outcome. Under normal circumstances, this protects ISC from typical disagreements about judging preferences or subjective artistic interpretation. However, this clause does not apply when the organization itself has violated its own binding rules.

Rule 13 of the 2024 competition explicitly forbids any AI involvement in any portion of the submitted work. Because two of the publicly announced 2024 winners exhibit substantial evidence of AI-generated and AI-assisted production, these entries were never eligible to be judged in the first place.

A “final decision” built on an ineligible submission is procedurally void, because:

  1. Organizations cannot enforce entrant compliance with a decision reached by breaking their own rules.

  2. Entrants consented to a competition governed by the published rules — not to a process in which prohibited entries were allowed to advance.

  3. The clause presupposes that ISC fulfills its responsibilities of screening, verification, and rule enforcement.

  4. When governing rules are violated, the protective clause collapses; fairness and rule integrity take precedence.

 

For these reasons, ISC’s “final decision” clause does not shield the organization from addressing this failure, nor does it obligate entrants to accept an outcome that was reached in violation of the published eligibility criteria.

Entrants fulfilled their contractual obligations by submitting original, rule-compliant human-created work; ISC did not fulfill its reciprocal obligation to uphold the rules that governed the competition.

Therefore, legitimate entrants are owed:

  • retroactive correction

  • transparent acknowledgment

  • reinstatement of rightful placements

  • and the restoration of trust in the competition’s integrity

6.3 Ethical Risks to the Celebrity Judges

The ISC’s celebrity judges represent some of the most respected names in music. Allowing AI-generated tracks to pass through a judging panel headlined by Grammy-winning, internationally celebrated musical artists is a reputational failure of the highest magnitude.

By not detecting AI-generated works, ISC has (unintentionally) positioned these artists as endorsing:

  • prohibited works

  • ineligible submissions

  • procedurally flawed selections

 

And the judges themselves have ethical responsibilities:

  • It is the responsibility of every judge, especially at this tier, to ensure that their endorsement reflects the standards of the art form they champion.

  • Failing to recognize and call out anomalies in vocal realism, spectral artifacts, coherence, and musical origin compromises their reputational integrity.

 

The judges were placed in an impossible situation: their names are attached to decisions that directly contradict the rules they agreed to uphold as ISC judges.

For the sake of their own reputations, the judges should insist on a formal correction. Their silence would suggest complicity, and that harms them more than anyone.

 

6.4 Ethical Failure by Entrants (Particularly the Second-Place Comedy "Winner")

One of the escalated "winners" is a former NYPD officer and current investigator for the Office of Homeland Security in the state of New Jersey. This raises extraordinary ethical concerns.

Public safety professionals employed by United States governmental bodies occupy positions of trust and are held to a higher standard of conduct. Submitting a song as one's own work when all available evidence indicates the entry violated the competition rules, particularly rules that were clear, strict, and repeatedly affirmed, constitutes:

  • a breach of professional integrity

  • a breach of artistic ethics

  • a violation of the implicit trust that competitions require

  • a direct harm to thousands of working artists across the world who depend on these kinds of opportunities.

 

This is not bending rules at a local talent showcase. This is unfairly gaining placement in an international, career-defining songwriting competition through fraudulent tools that displace the legitimate work of hard-working human creators, and staying silent about it for six months.

 

This is exactly the type of erosion of artistic integrity that is harming the arts industry today.
People in positions of authority and governmental influence must be held to equal standards, not be excused from them.

Artists have fought for decades to build platforms like ISC that have opportunities that can change their lives. And the fight continues today. Competitions like this exist because of creative integrity, not in spite of it.

 

6.5 Recommendations for Immediate Corrective Action

 

1. Immediate Disqualification of Any Entries Showing Substantial Evidence of AI Involvement

Under Rule 13 (2024), ANY AI involvement renders a submission immediately ineligible without refund. No exceptions. The two entries identified as ineligible by this report's expert auditory analysis, and corroborated by the secondary evidence detailed above, must be:

  • immediately stripped of their winning titles

  • kept visible on this year's page of outcomes in a new category marked “disqualified – rule violation” 

  • replaced accordingly with the deserving songwriters.

This must occur before the 2025 winners are announced.

 

2. Public Statement and Formal Apology

 

Because the 2024 winners are still public, ISC must issue:

  • a public correction

  • a transparent explanation of the procedural failure

  • an apology to the entire pool of honest, displaced songwriters

  • clarification that celebrity judges were not knowingly endorsing AI-generated works

 

This is the beginning of a restoration of trust both for entrants and judges themselves.

 

3. Reinstatement of Correct Human Winners for 2024

 

Legitimate, rule-abiding songwriters from the 2024 ISC cycle must be publicly declared as winners in the positions they legitimately earned. Anything less is unjust and inconsistent with ISC’s own rulebook.

4. Clarify the 2025 Rule Change and Prevent Public Misinterpretation

 

The shift from “ZERO AI involvement allowed” (2024) to “majority human authorship required” (2025) is a dramatic and ethically consequential policy reversal.

This must be addressed because:

  • It suggests that ISC may have changed the rules after discovering AI slipped through.

  • It raises the appearance of revenue-motivated decision-making over artistic integrity.

  • It abandons the clarity that 2024 entrants relied upon.

  • It opens the door for AI-dominant submissions to win major songwriting titles.

 

ISC must publicly explain:

  • why the rule changed

  • why it changed so drastically

  • why an AI category wasn’t created instead

  • how they will protect human songwriters moving forward

 

5. Establish Verifiable, Audit-Ready AI Compliance Protocols

 

The ISC’s abrupt shift from the 2024 standard of “zero AI involvement permitted” to the 2025 rule of “majority human authorship” introduces unprecedented ambiguity and invites even more systemic abuse than the competition has already experienced. As written, the 2025 rule is unenforceable, undefinable, and ripe for exploitation, because:

  • ISC provides no definition of “majority authorship.”

  • ISC outlines no verification procedure to distinguish human vs. AI contributions.

  • ISC collects no supporting documentation that could substantiate human creation.

  • ISC performs no pre-screening for AI at submission.

  • ISC appears positioned to benefit financially from increased submissions regardless of authenticity.

 

This creates a compliance environment where:

  • A fully AI-generated track with two lines rewritten by a human could claim “majority authorship.”

  • A human could write 51% of lyrics while outsourcing all vocals, arrangements, and instrumentation to AI.

  • Entrants using AI could win categories specifically intended to reward human creativity, while adhering to the “letter” (but not the spirit) of the rule.

  • AI-dominant submissions could overwhelm categories, pushing genuine songwriters out of the awards pipeline.

 

To prevent widespread misunderstanding, misuse, and unethical practices, ISC must implement clear, strict, externally auditable AI protocols that define, document, and verify majority-human authorship at the time of submission.

Since it is too late to completely change the rules for ISC's 2025 competition cycle, I recommend that ISC apply the following mandatory safeguards to all 2025 entrants, regardless of whether they have already submitted and paid, and offer refunds to any entrant who wishes to withdraw from eligibility. (ISC has, after all, newly asserted its right to change its position on AI at any time given the rapidly changing landscape.)

A. Define “Majority Human Authorship” in Measurable Terms

 

Without quantification, the rule is functionally meaningless. The phrase must be rewritten to reflect concrete, enforceable standards, such as those listed below (a toy scoring example follows the list):

  • A minimum percentage of lyrical content produced by a human

  • A minimum percentage of melodic composition produced by a human

  • A minimum percentage of instrumental arrangement produced by a human

  • A maximum allowable percentage of AI-generated audio or stems

  • A total prohibition on AI-generated vocals unless in a designated AI category
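To illustrate what a measurable standard could look like, the toy calculation below assigns each song component a weight and a human-authored fraction. The components, weights, and 51% threshold are illustrative assumptions for the sketch, not ISC policy.

# Hypothetical scoring scheme: all numbers here are assumptions.
HUMAN_SHARE = {           # fraction of each component authored by a human
    "lyrics": 1.00,       # entrant wrote all lyrics
    "melody": 0.40,       # AI proposed most melodic material
    "arrangement": 0.10,
    "vocals": 0.00,       # fully AI-generated vocal performance
}
WEIGHTS = {"lyrics": 0.25, "melody": 0.30, "arrangement": 0.20, "vocals": 0.25}
THRESHOLD = 0.51

score = sum(HUMAN_SHARE[part] * WEIGHTS[part] for part in WEIGHTS)
print(f"human authorship: {score:.0%}")            # 39% in this example
print("eligible" if score >= THRESHOLD else "ineligible")
# A companion rule banning AI vocals outright would disqualify this entry
# regardless of the weighted score.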

B. Require a Signed AI Disclosure Form With Every Entry

 

Entrants must attest whether their work includes:

  • AI-generated lyrics

  • AI-generated melodies

  • AI-generated vocals

  • AI-generated arrangements

  • AI-generated stems/instrumentation

  • AI mastering

  • AI-assisted editing or mixing

 

False attestation should result in retroactive disqualification.
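A disclosure of this kind is most useful if it is machine-readable, so it can be checked against screening results automatically. The sketch below is one possible shape for such a record, assuming one signed attestation per entry; the field names mirror the list above but are otherwise hypothetical.

# Minimal sketch of a machine-readable AI disclosure record.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIDisclosure:
    entry_id: str
    entrant_name: str
    ai_lyrics: bool = False
    ai_melody: bool = False
    ai_vocals: bool = False
    ai_arrangement: bool = False
    ai_stems: bool = False
    ai_mastering: bool = False
    ai_editing_or_mixing: bool = False
    attested_truthful: bool = False  # false attestation -> retroactive DQ

    def any_ai(self) -> bool:
        """True if the entrant disclosed AI use in any category."""
        return any(v for k, v in asdict(self).items() if k.startswith("ai_"))

form = AIDisclosure("ISC-2025-00123", "Jane Doe", attested_truthful=True)
print(json.dumps(asdict(form), indent=2))
print("AI disclosed:", form.any_ai())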

 

C. Require Submissions of Source Files and Sessions

 

Should questions arise about a future category winner and the validity of its majority-human-authored entry, ISC should already possess the necessary verification materials, scrutinized prior to the announcement of the winners. These may include:

  • DAW project files

  • Lyric drafts

  • MIDI session stems

  • Vocal session stems

  • Evidence of human recordings

  • Timestamped version history

  • Handwritten musical notation on manuscript paper

 

Human songwriting produces a physical and digital paper trail. AI-generated music does not.

 

D. Mandatory AI Screening at Submission

 

To prevent AI-dominant tracks from ever reaching the judging stage, ISC must implement automated scanning or manual review of all submission materials. The automated process (sketched in code after this list) should use:

  • IRCAM Amplify (AI detection)

  • Metadata analysis

  • Encoder signature scrutiny

  • Spectral pattern evaluation (when needed)
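A minimal sketch of such a screening pass appears below. It uses mutagen for the metadata and encoder checks; the tag markers are illustrative guesses, the file name is hypothetical, and the IRCAM Amplify call is left as a placeholder because its API is not documented in this report.

# Minimal screening sketch. Markers and file name are hypothetical.
from mutagen import File as MutagenFile

SUSPECT_MARKERS = ("suno", "udio")  # illustrative generator fingerprints

def screen_submission(path: str) -> list[str]:
    """Return a list of review flags for one submitted audio file."""
    flags = []
    audio = MutagenFile(path)
    if audio is None:
        return ["unreadable or unsupported file"]

    # 1. Metadata analysis: scan tag values for generator fingerprints.
    for key, value in (audio.tags or {}).items():
        if any(marker in str(value).lower() for marker in SUSPECT_MARKERS):
            flags.append(f"suspect tag {key}: {value}")

    # 2. Encoder signature scrutiny: record the encoder string (exposed
    #    by mutagen for MP3) for comparison with known AI-export encoders.
    encoder = getattr(audio.info, "encoder_info", "")
    if encoder:
        flags.append(f"encoder: {encoder}")

    # 3. AI detection: forward the file to a detector such as IRCAM
    #    Amplify here (request format omitted; not documented here).
    return flags

print(screen_submission("entry.mp3"))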

E. When this year's submission window closes, it is my recommendation that ISC return to a mandated ban on submissions created in full or in part by any AI system and add a dedicated AI-Allowed category. (Or ISC could launch an entirely new AI-only songwriting competition.)

 

Instead of diluting the integrity of all songwriting categories by judging two completely different songwriting processes together, ISC should establish a dedicated AI category for entrants who prefer to use AI in their workflows, encouraging innovation without compromising respect for the songwriters who have carried the competition to where it is today. By keeping AI-authored or AI-influenced songs separate from legacy songwriters' work, ISC will maintain fairness by judging each process as it deserves to be judged. This also helps avoid the perception that ISC has “sold out” to protect submission revenue.

A competition cannot simultaneously claim to uphold human creativity and allow AI-dominant work to compete against human artists without categorical separation.


AI music is not inherently bad—it simply must not compete with human songwriting when humans are promised a human-only competition.

 

A separate AI category preserves:

  • transparency

  • fairness

  • artistic legitimacy

  • revenue (without compromising rules)

F. Require Judges to Sign AI-Compliance Verification

 

Judges at every stage of the screening and judging process should be required to affirm that:

  • they have actively listened to the entries in their entirety and are equipped with appropriate measures to objectively identify, qualify, and quantify AI influence in the submitted songs;

  • they understand the rules;

  • they would NOT knowingly endorse an ineligible work;

  • they stand by their adjudication scores.

 

This protects their reputation and the competition’s credibility.

 

6.6 Call to Action: Protect Songwriting. Protect Songwriters.

The International Songwriting Competition has long been a pillar of artistic excellence—a place where creators could place their trust in the integrity of the process and the meaning of recognition. The discovery that AI-generated and AI-modified works were awarded top honors in a year when such involvement was explicitly forbidden is more than a clerical issue. It is a breach of trust, a blow to artistic fairness, and a disservice to every songwriter who has given their artistry, their passion, and their money to support the mission of the International Songwriting Competition throughout all 23 years of its existence.

 

The rightful winners of 2024 must be restored, celebrated, and rewarded as is deserved and well overdue. The competition must publicly correct the record. The judges must demand accountability to protect their reputations. And ISC must reaffirm its commitment to fairness, to its mission, and to human creativity by enforcing its own rules without hesitation.

 

This is not an attack—it is an urgent call to honor the very art form ISC was built to celebrate.
The future of songwriting deserves better.

7. Author's Statement

I conducted this investigation because of a problem I have witnessed repeatedly in my work as a professional musician, educator, and vocal coach: the growing number of people who claim the title of “songwriter” while relying entirely on generative AI systems to do the songwriting work for them. In recent years, I have seen an influx of new students arrive at my studio excited to “share their original songs,” only to reveal that their role in the creative process amounted to typing prompts into Suno. As an artist who has spent more than thirty years training, studying, and practicing the craft of composition, storytelling, vocal technique, and musical interpretation, I found myself troubled by how casually the term “songwriter” was being redefined—quietly, rapidly, and without resistance.

About a year ago, one of my students asked me to help her write and submit a song to the International Songwriting Competition (ISC). That was the first time I had heard about the competition. I agreed, with the full intention of teaching her the true process of songwriting from the ground up, and in record time, too. But as she began to understand the depth and discipline required to compose and record a song—lyrically, musically, structurally—she became overwhelmed and withdrew her request. Her reaction felt like a manifestation of the very concern that had been growing in me: many people now mistake AI music generation for human songwriting ability and artistry. And I wanted to demonstrate—to her, to myself, and to my studio—that the hard work of actual human songwriting still matters.

So, one week before the ISC submission deadline, I decided to write a song of my own from scratch to enter into the competition myself—not to win, but to model the real creative process for my students. I wrote a fully original song from scratch in those seven days, documented the process, and submitted it just in time. Then I did what most entrants do: I moved on and forgot about it.

 

A year later, an ISC promotional email landed in my spam folder. Suddenly remembering the chaotic week I had spent writing the song, I checked the ISC website out of curiosity to listen to the previous year’s winners. What I heard put a pit in my stomach. The second-place Comedy category winner—the very category I had entered—immediately exhibited the unmistakable markers of AI generation. And that realization cut sharply against the entire reason I had entered the competition in the first place: to lead by example and demonstrate the value of real artistic labor.

That moment became the catalyst for this investigation and report.

I did not begin this investigation to expose wrongdoing or to advance my own entry. (I never stood a chance against the true songwriters on that list, like my newest obsession, Rett Madison, who scored 1st place in the Americana category with “One for Jackie, One for Crystal,” the most amazing song I've heard in a LONG time.) I did it because the very thing I sought to teach—integrity in songwriting—had been undermined by the competition itself. My involvement as an entrant is therefore a matter of transparency, not motivation, and I include it here in the interest of full honesty with anyone who reads this far. I neither expect nor seek any reconsideration of my own submission. My placement in the competition is irrelevant to the findings, the methodology, and the conclusions of this report.

This investigation exists because I believe that human creativity is worth defending. My students deserve a world where songwriting competitions reward actual songwriters. My colleagues—artists who have devoted their lives to this craft—deserve fairness. And the music community at large deserves competitions that uphold their own rules.

 

My commitment to this process has always been the same: to stand for artistic integrity, to protect the value of human musicianship, and to speak honestly at a time when silence would only further erode trust in the spaces meant to celebrate human creativity.

This report is offered in service of those values.

8. About the Author

Joseph Stanek is a musician, producer, educator, and scholar whose multifaceted career is founded upon 30 years of dedicated study in acoustic phenomena, the science of performance, and fiercely authentic self-expression. Through his private studio, he has provided the technical backbone for some of pop and Broadway's biggest names. His work with longtime clients, including Kristin Chenoweth, Jennifer Hudson, and Ariana Grande, has contributed to multiple Billboard-recognized releases, highlighted by Chenoweth's The Art of Elegance, which debuted at #1 on the Top Jazz Albums chart and held the spot for eight consecutive weeks.

Stanek's contributions extend globally, having produced the Tabernacle Choir’s internationally broadcast Christmas Concert, reaching more than 66 million households. The recipient of the Pi Kappa Lambda Scholarly Writing Award, he founded Tour de Fierce® to empower artists through authentic expression and technical mastery. His mission is to steward honest, skill-driven performance practices that welcome new opportunities for creative expression through technological and artificial intelligence advances rooted in transparency and innovation.

9. Contact

Joseph Stanek (Seph Stanek)

Producer | Researcher | Educator

Founder & Owner of Tour de Fierce®

New York, NY

Email: contact@tourdefierce.vip

Website: www.tourdefierce.vip

Full Report:

2024 AI-Detection Report in the International Songwriting Competition
https://www.tourdefierce.vip/research/isc-ai-detection-report-2024


© 2025 Joseph Stanek. All rights reserved.

Portions of this report may be quoted or referenced with proper attribution.
Please link to the official publication page when citing this work.
Reproduction or distribution of the full report requires written permission from the author.

Cite This Report

Recommended Citation:

Stanek, Joseph. International Songwriting Competition 2024 AI-Detection Report. Tour de Fierce Research, 2025.
https://www.tourdefierce.vip/research/isc-ai-detection-report-2024

