Who owns culture?
The age-old question that nobody spoke about until 50 years ago.
In contemporary culture, an arena colored by identity politics, fueled by social media outrage, and undermined by gung-ho commercialism, we care hugely about this question. We are encultured to judge businesses and creators first by where they come from (their heritage), then by where they are heading (their goals), and finally by what they are doing (their product). At any misalignment between cultural product, cultural heritage, and cultural objective, the sirens in our ultra-sensitized ape brains start blaring. VETOED. BANNED FROM GAME.
Now, as the frontier world of artificial intelligence (AI) collides with creativity and art, we enter a new playing field. A field where the old rules of the game no longer seem to fit.
When a cultural product is created/designed by an artificial intelligence system, who really “owns” that cultural content? Who is really in control?
When a commercial misstep is taken, a cultural clanger committed, to whom should we direct our vitriol? Man, or machine? Is outrage even a reasonable reaction?
To search for answers, I want to dissect a music industry scandal that erupted in the past few months between an artist and a management company. Between a virtual artist called FN Meka, and their very real management company — Factory New.
It is impossible to discuss this case without venturing briefly into identity politics — specifically race and cultural appropriation between races.
Exactly who created FN Meka, a virtual artist who very clearly borrows stylistically and artistically from black-dominant cultures, is a key point of tension in the debate.
However, I want to focus the lens on a different, but equally significant, dimension: AI design and ethics. Using a toolkit of music psychology, AI design thinking, and ethical philosophy, I will propose a new way of thinking about, and safeguarding against, AI-generated cultural crimes.
A bit of context
Shortly after the artist announced a record label deal with music industry giant Capitol Records, black-led activist industry group Industry Blackout posted a callout on Instagram, accusing the artist’s management company of gross cultural misappropriation and demanding that Capitol Records cancel the artist’s contract — which the label promptly did.
As always, too little too late. Social media, predictably, blew up. The story lit the fuse of longstanding grievances in the music industry about artist exploitation and cultural appropriation by big business, fanned by a secondary revelation in the aftermath that Factory New had never paid or compensated the black artist who wrote and recorded the lyrics and music for FN Meka.
Moreover, the current context of A-list human rap artists Young Thug and Gunna being arrested and tried under RICO charges in response to their extreme lyrics (and related actions), sat very awkwardly indeed with some of FN Meka’s more explicit lyrics and visuals.
Overall, let’s just say the whole affair stinks.
Why are we outraged?
Prior to Industry Blackout’s callout post, exactly how FN Meka came into being, and how they produced their artwork, was not clear to anyone outside the Factory New team.
Now that the truth is out, we are deeply uncomfortable, even outraged. By the idea of businessmen exploiting artists, reaping the rewards of their craft with barely even a clap on the back as compensation. By the idea of non-blacks appropriating black culture to serve their own commercial goals.
Just as we bristle at the thought of record execs controlling the artistic output and message of their artists (a narrative staple of every music biopic from Bohemian Rhapsody to Rocketman to Straight Outta Compton), we aren’t comfortable with the idea of Factory New manually controlling the artistic output, and cultural associations, of the artist. We feel deceived and cheated. As if the product we desperately want to enjoy is somehow tainted, dirty, and embarrassing. Or at the very least boring in its commercialism.
Having followed FN Meka on social media for almost a year prior, accepting their (to my ears) sub-par artistic output due to my interest in the project and its supposedly new form of synthetic media, I am also personally just disappointed with how boringly manual the production process was. How un-technological. How at odds with the much-lauded “game-changing use of AI.” Two businessmen briefed an artist to create something cool and popular, he did so, and the business then profited from the cultural output. Seriously, how flaccid.
To summarize, from a cultural standpoint, I think we can all agree with and share in Industry Blackout’s outrage. The fact that FN Meka was manually produced by non-black businessmen, using the creative input of a black artist who was unpaid for their work, reeks of artist exploitation and cultural appropriation.
Disappointment aside, I think there are two interesting questions to pose here.
The first question, for which our contemporary sharpened sense of cultural ownership has a clear answer:
Would it have been more “ok” if FN Meka had instead been manually created by black businessmen?
What we care about here is cultural ownership. It’s about who has the right to use the ideas and motifs of a certain cultural heritage and to exploit those cultural elements for commercial gain.
The second question, the question into which we will now dive headfirst, is one we have far less clear an intuition about:
Irrespective of who owns the business, would we be less outraged if FN Meka had actually been created by an AI?
The alternate universe: what if FN Meka were created by an AI?
To pull this apart a little, let’s run a counterfactual. Let’s say the design and production of FN Meka had been as automated by AI as the founders misleadingly made out. How would this change the way we feel about the ethics behind the project? And could we reasonably expect a different end product?
First, an important point from the psychology of popular music.
We humans like to think of ourselves as culturally curious, and appreciative of new cultural and artistic ideas.
However, in reality, humans gravitate towards the familiar, towards musical ideas and references that we have heard before. This is an area I looked at during my music and science studies at Cambridge — why do we like to listen to what we like?
In general, we have a strong preference for familiar music and sounds because we already know how to appreciate them. It’s as if we have a mental map of the song already established in the neural pathways of our brain, meaning that we can confidently sit back and enjoy the song without worrying about getting lost.
Taking in new music can be a very rewarding experience, but it is simply harder. It loads the brain with a complex cognitive challenge that activates multiple, simultaneous neural systems.
So, we, the audience, have a preference for familiarity. But what is the connection between this and our FN Meka counterfactual?
Let me introduce one more key concept to the mix. A simple concept from AI design.
When we talk about AI in the present day, what we are talking about is artificial intelligence designed to achieve one specific goal. We call this narrow intelligence. A narrow AI uses a learning algorithm to perform a single task, and over time uses the knowledge it gains to perform that one task better.
A good example of narrow AI is DeepMind’s original AlphaGo, whose singular purpose was to improve its ability to play the board game Go.
Side note: the “other type” of artificial intelligence within AI theory — artificial general intelligence (AGI), which can apply learned knowledge to other tasks, and mimic complex human thought — is not, as far as we know, currently in existence.
Ok, so we know that humans in general prefer familiar music. And we also know that today’s AI is designed to optimize towards one specific goal.
Now let’s return to our counterfactual. If the design and production of FN Meka had been fully automated by AI, could we reasonably expect a different end product?
Here we have a narrow AI, designed by a commercial music business, for which we can assume a specific commercial goal has been programmed. Like any other commercial music business, it isn’t a stretch to assume that the goal would be to create the most commercially appealing product.
We, the consumers, have a strong preference for the familiar. So — if you were the AI, how would you design the product (in this case, the synthetic media artist)?
A) Design the artist using a random amalgamation of pan-cultural aesthetic traits, and an equally random amalgamation of artistic motifs and ideas? Like dipping your hand, blindfolded, into a large bucket containing millions of different food flavorings, ranging from strawberry to sewage, from blueberry to bogie, and then tipping the first 5 substances that come to your hand into your casserole dish. (Anyone for a sewage and bogie-flavored stew?)
Or would you:
B) Design the artist using a tried and tested combination of aesthetic traits and artistic motifs, a combination proven over recent history to resonate strongly with the commercial target audience? More like dipping your hand, eyes open, into the same bucket of flavorings, and carefully selecting the 5 flavors that you feel would blend well in your stew.
I feel quite confident that I know what you would do, what we would all do.
Similarly, if the AI’s goal is to create a synthetic artist that is commercially attractive and successful, and the AI is working well, then ultimately the AI should create — or rather recreate — an artist that is familiar to us. Familiar in how they look. Familiar in how they behave, sing, and talk.
The AI should create someone like… FN Meka.
Someone who looks and sounds familiar.
Someone who has certain notable points of difference, but as a whole package fits with our expectation of what a rap star is.
Someone with a clear brand that easily enables potential commercial partners to identify product-market fit.
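The blindfolded-versus-eyes-open choice above can be reduced to a toy optimization. The sketch below is purely illustrative: the trait names and "familiarity scores" are invented stand-ins for whatever historical audience data a real system might learn from, and the greedy selection stands in for a far more complex learning algorithm.

```python
import random

# Hypothetical trait pool: each aesthetic/artistic trait paired with an
# invented "familiarity score" standing in for how strongly it has
# historically resonated with the target audience.
TRAIT_POOL = {
    "face tattoos": 0.9,
    "green braids": 0.7,
    "auto-tuned vocals": 0.85,
    "trap hi-hats": 0.95,
    "luxury car imagery": 0.8,
    "polka accordion": 0.05,
    "sea shanty chorus": 0.1,
    "gregorian chant": 0.08,
}

def design_artist_random(pool, k=5, seed=None):
    """Strategy A: the blindfolded dip -- pick k traits at random."""
    rng = random.Random(seed)
    return rng.sample(sorted(pool), k)

def design_artist_optimized(pool, k=5):
    """Strategy B: eyes open -- greedily pick the k most familiar traits,
    which is what a narrow AI optimizing for commercial appeal would do."""
    return sorted(pool, key=pool.get, reverse=True)[:k]

def expected_appeal(pool, traits):
    """Mean familiarity of a trait combination: our stand-in for the
    AI's commercial objective."""
    return sum(pool[t] for t in traits) / len(traits)
```

By construction, the optimized designer always reproduces the most familiar combination, and so always scores at least as high on the objective as any random dip. That, in miniature, is why a commercially-goaled narrow AI converges on the familiar rather than the novel.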
This might be an unpopular idea, particularly given the unpleasant-tasting cultural backdrop of artist exploitation and cultural appropriation. But from an objective viewpoint informed by peer-reviewed music psychology research into what we like, and an understanding of how AI systems are designed to achieve goals, it is a simple and unavoidable truth. A truth that, as we venture further into the realm of AI-generated synthetic media, is likely to keep rearing its irritating little head.
How we can safely navigate the world of AI-generated synthetic media
An interesting paradox relating to how humans perceive artificial intelligence:
Because of our faith in systems, and perhaps also our awareness of human fallibility, humans tend to trust artificial intelligence with crucial decisions more than they trust other human beings. At the same time, it is our lack of understanding of those same AI systems, our inability to relate to or empathize with the AI’s “thought process,” that leads us to give the AI more leeway when errors are made.
We intuitively sense that we cannot reasonably be outraged at the AI system directly. (Plus, as anyone who has played chess or football against a computer knows, screaming at a computer is a sure-fire path to deep feelings of foolishness.)
So, in this counterfactual world where our synthetic virtual artist has genuinely been created and operated by an AI system, should we be outraged? If so, who should we be outraged at? And what should we expect of them as mitigation and compensation?
As will become apparent if you browse through the AI-generated music available on the internet, we still have a considerable way to go until AI can generate convincing and ultimately stimulating music.
We are, however, swiftly tumbling into a world peppered with synthetic media. A world where technological advancement is outpacing our human brain’s ability to decipher whether what we see and hear came from an organic or a synthetic source. Take 5 minutes to watch the music video for “The Heart Part 5” by Kendrick Lamar. Or check out the work of companies like Synthesia, who are building synthetic video avatars voiced by text-to-speech technology.
This is the world we live in today.
Let’s call a spade a spade. The music industry is a commercial terrain. People and players operate to generate profit. Record labels, producers, merchandisers, events companies, digital media companies — everyone is incentivized to put out products they think will be commercially appealing and monetizable while paying minimal attention to “ethical” considerations such as cultural ownership or consumer safety.
As synthetic media becomes productized, commercialized, we therefore shouldn’t be surprised when a commercially-driven AI creates a product that we, as consumers, show a preference for. It is simply what we, as a mass audience, want.
However, and this is a big however, my point is not that we should simply accept everything business owners decide to create, launch, and monetize on the grounds that AI-generated synthetic media is new territory and that our understanding of how complex AI systems work is often limited.
My point is that, being realists, products like FN Meka are unfortunately just what we should expect. “Give the people what they want. The customer is never wrong. Money talks” and all that.
The bottom line is that if we want, in future, to avoid being offended and outraged by culturally tone-deaf products like FN Meka, we clearly need to introduce additional guardrails to the rules of play.
Ethics is no longer about who. It’s about how.
Commenting on the FN Meka scandal, Vice says: “At the core was a question of ethics: who contributed to what the first AI rap star would look or sound like?”
AI ethics is a nascent field, marked at present by a gap between its generally consequentialist theoretical framework, and the reality of how little we understand about how AI systems operate.
The so-called “black box problem” within AI — whereby, due to the great complexity and low transparency of AI models, even expert analysts are unable to understand a model’s process or how its end result was produced — shines a light on the problematic expectation that business owners across all industries can effectively moderate and control the output of AI under their supposed control.
The question here is: what then can we expect business owners to be fully in control of? And what responsibility should we therefore assign to the business owner vs any other involved party?
Looking briefly at a famous legal case within AI law, one involving Uber’s autonomous automobile division, we can see that the answer to this question isn’t always clean-cut.
Traditionally, US law establishes that AI systems cannot be held responsible for their actions. The ruling from the fatal accident in Uber’s autonomous vehicle test program — where a pedestrian pushing a bike was hit and killed because the car’s AI system failed to properly recognize her as a human and therefore never initiated the braking algorithm — highlights this. Only the human safety driver, the party we could classify as the “direct controller,” was held legally responsible and charged. Not Uber. Not the AI it developed.
To avoid getting sucked into the pertinent “shouldn’t Uber as the manufacturer/designer be held legally responsible?” debate, let’s focus back in on the FN Meka case — a circumstance where the manufacturer/designer, and the direct controller, are conveniently the same party: Factory New.
I believe that the answer here is much clearer.
If an AI system is designed with a clear commercial goal and is programmed to optimize towards that goal, we cannot reasonably expect the AI system to act in alignment with our human moral code of “right” and “wrong.” AI does not have a sense of ethics, of cultural sensitivity, of empathy; it lacks the moral code that moderates our human behavior, warning us about what should be — in our eyes, and likely in the eyes of other people — “right” or “wrong.”
For sure, being one or more steps removed from the creative process should not absolve business leaders of responsibility for their product. But as we enter a world of AI-generated synthetic media, I believe that the color of a business owner’s skin, or the culture from which they originate, ceases to be the primary concern.
Returning to Vice’s statement, it really is no longer about who.
It is about how.
How the people who control an AI design its algorithms in an ethical way.
How they prove that they have made the greatest efforts to understand the AI’s process as fully as possible.
How they have safeguarded the quality and ethical standards of the data being fed into the AI.
In the floating sea of ethics, where not every question can be boiled down to a consequentialist equation of right or wrong, positive or negative, we ultimately have but one recourse:
For us — as business people, as AI specialists, as artists and creators, as artificial intelligence systems — to go about our craft, aligning what we do and why with how we do it.
This is the essence of ethical business. I’m not saying that’s easy. But in a world powered by intelligence that may — no, almost certainly will — far outstrip our own, really what could be more important?