Midjourney Style Tuner Explained

Last week, Midjourney released its newest—and potentially most impactful—feature yet: the Style Tuner.

It allows you to create an infinite number of your own, uniquely built Midjourney styles and apply them to your prompts.

Quick facts

1

The /tune command is only compatible with Midjourney model Version 5.2.

2

/tune is compatible only with the --aspect, --chaos, and --tile parameters. This means no --stylize, --stop, --quality, etc.

3

The /tune command accepts Text prompts, Image prompts, and Multi-Image prompts. However—unlike with /imagine—the latter require a text component.

4

You can re-use tuned styles without the above parameter limitations (they remain tied to --v 5.2, though) by simply adding their codes to your new prompts like so: --style <code>.

5

/tune spends Fast Hours even in Relax mode. But using tuned styles doesn't require Fast Hours.

6

You can combine multiple style codes using a hyphen as a separator: --style <code 1>-<code 2>-<code n>.

Let's now dive into the amazing (and unpredictable!) world of the Style Tuner.

How to use Style Tuner

/tune
prompt
colorful luminescent mossy flora --v 5.2

Using /tune is pretty easy, and once you get the hang of the workflow, creating new styles becomes a really exciting adventure with little technical effort.

1

Start with a prompt of your choice. Since /tune doesn't show a preview of your initial prompt, I recommend previewing it first with /imagine.

2

Choose the number of style directions (i.e., image pairs) you want to see in your Style Tuner: 16, 32, 64, or 128 pairs.

3

Choose which style mode you prefer: Default or Raw. The effect of the Raw option is similar to --style raw in a normal prompt.

!

The generated images use your subscription's Fast GPU time, even if you are working in Relax mode. The default 32 pairs eat up about 0.3 Fast GPU hours, 16 pairs require about 0.15, and the 128-direction option takes a whopping 1.2.

4

After you submit the prompt, Midjourney will return the exact cost of tuning and ask for confirmation.

5

A 32-pair generation usually takes a couple of minutes, after which you will receive a link to your Style Tuner page.

6

Even one pick is enough to generate a new style! But you can also make a choice from each pair. The fewer choices you make, the bolder and more pronounced your final style will be; the more choices you make, the more diverse and versatile the outcome will generally be.

7

Return to Discord, use the /imagine command, type in a prompt followed by the --style <code> parameter, and that’s it!
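For instance, returning to the example prompt shown at the top of this section, applying its freshly tuned style would look something like this (<code> stands in for the code Midjourney generates for you):

/imagine
prompt
colorful luminescent mossy flora --style <code> --v 5.2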

The results

For my first ever style, I picked eight directions, trying to keep the choices as consistent as possible.

After making my choices, I used the resulting code with the initial prompt, as well as with some prompts from the Midlibrary Benchmark.

Can you spot which input images in particular influenced the final style?

Applying your tuned styles

/tune
prompt
macro photograph of adorably cute Mainecoon kitten with fluffy cheeks, big round eyes, soft patchy fur, tiny pink nose, white mittens, playful bushy tail, expressive tufted ears --v 5.2

The codes can be used with any prompt. However, the ones within the “realm” of the initial prompt will generally work better.

Here is a quote from the Midjourney team themselves that might help us better understand tuned styles:

"Styles are optimized for your prompt and may not always transfer as intended to other prompts (ie: a cat style may act unexpectedly for cities, but a cat style should transfer to a dog)"
— from Style Tuner announcement on Discord↗︎

To test this, I used the most temperamental prompt from our Benchmark set: cute Mainecoon cat. It is notorious for being a hard target for style modifiers, many of which stop working when meeting the cat.

The quote above means that a style tuned from a "cat" prompt works better with "cat" prompts. Enter the Ultimate Cat Prompt:

After getting back 64 images of extremely cute (and some bizarre, but still cute) cats, I couldn't help but choose the most adorable ones. After a lot of internal struggle, I ended up with ten pairs.

Imagine my surprise when I ran the resulting style with the cute Mainecoon cat prompt...

Yes, unexpectedly, the Mainecoon inherited very little from the images that went into the tuned style... But what about “non-cat” prompts?

Another disappointment. There are some traces of some of the originals (mostly in the color scheme), but the overall outcome is pretty random, and the style is all but indistinguishable from generic output.

But maybe this style will apply well to the simplest, most basic one-word prompts?

O-okay... Moving on then. :)

I then decided to reshuffle my style directions. I reset the picks, and this time, chose only five images as input.

And even though the results are once again unexpected, this time, the style is much more distinct and present!

Even though they don't resemble any of the initial style directions, the dog and the cat are unique and consistent with this one.

However, it is the lowest number of style directions that delivers the best results. For this final "cat test," I chose two somewhat contradictory styles with powerful visual features.

Although slightly unexpected again, this new—focused—style brought back a coherent, consistent outcome, clearly inheriting from the initial directions.

But what about the cat and the dog? Well, I suppose a tuned style works better with somewhat more complex prompts than just one word. :)

As you can see, going too far with the number of directions dilutes the style, makes it too varied, and, thus, blurs its unique features. Going low allows for a more controllable, more distinct style that you can apply to many different prompts with different contexts and subjects.

More Details = Better Styles

/tune
prompt
Manga illustration depicting samurai sword handle. Dynamic strokes, power composition. Fine lines, black-and-white pencil drawing with thick dark shadows --v 5.2

But what if we feed a very specific prompt into the tuner—as detailed as reasonably possible, describing the visual style, the artistic technique and colors used, and, finally, the context and the subject?

For this test, I went with a very particular set of details and combined it with the specific description of the main character and situation.

Let's go backward and begin with the simplest tuned style possible—picking a direction from just one image pair (i.e., only one input image to build a tuned style on)...

...and I got quite a diverse result for such a specific sample. However, the fine lines and the overall “shaky” feel are persistent in almost every generation.

Let's see what happens if we add another direction to the initial style: this time with smooth gradients and washed out tones.

And it immediately made the resulting tuned style more layered and complex.

My next experiment features four directions that are fairly close to each other: bold lines and shadows, pronounced details, dynamic perspective lines, and a very clear monochromatic palette.

The results are quite diverse, but you can still see the inheritance from the initial images.

Finally, let's pick sixteen directions and see if any details will stick.

The results are quite diverse, but you can still see the inheritance from the initial images (and a lot of details!).

To sum it up, my experiments with the Style Tuner showed that the more visual stylistic details I describe in my initial prompts, the better the outcomes are.

Infinite styles

/tune
prompt
space megastructure made of iridescent metal with colossal rotating wheel-like engines. Psychedelic nebula, floating planets, surreal sci-fi --v 5.2

The amazing thing about /tune is that you can produce an infinite number of styles from just one initial prompt.

Here is a good example: a prompt that produced some very opposing results in some of the pairs.

For the first try, I picked the directions that are similar structurally and have quite a close color scheme:

This is the first experiment where I could finally predict (to a certain degree) what the outcome would be.

Note how even very focused (only two directions) styles can miss the mark on some prompts.

But let's see what changes if we add just one more pair into the mix:

Not only is the outcome much closer to the three initial images, but it has also become a bit more consistent—compare the Thom Yorkes.

For the final test here, let’s pick four samples: two depicting dark and ominous objects and the other two from the other side of the spectrum—high-key, bright, pastel images.

Et voilà—a completely unique style very different from what we’ve seen from that same set before.

Every new choice you add—each change of a pick from the previous iteration—might create a dramatically new style.

The (Tuned) Style Roulette

/imagine
prompt
Chernobog --style random --v 5.2

Another interesting feature Style Tuner offers is --style random. It is a way to generate… well, a random tuned style (without even opening the Style Tuner interface or spending your Fast Hours). Let’s see if it’s as great as it sounds!

To kick things off, all you need is an initial text prompt (or Image/Multi-Image Prompt). Simply add the --style random parameter to it and watch the Midjourney magic happen. By appending a number (16, 32, 64, or 128), you specify the number of input directions that Midjourney will use for the new style: --style random-64 (32 being the default).
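So, for example, asking for a random style built from 64 directions instead of the default 32 would look like this:

/imagine
prompt
Chernobog --style random-64 --v 5.2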

I started this experiment with one of my favorite benchmark prompts—Chernobog—which gives Midjourney a lot of leeway in interpreting the image. This is because Chernobog doesn't have a well-defined visual identity in modern culture/MJ's dataset (just a theory ¯\_(ツ)_/¯).

As a result, the output from this prompt is often quite unique.

After the first several attempts, I found a depiction of Chernobog that appealed to me both character- and style-wise. I then copied the tuned style code and applied it to other Benchmark prompts…

What a disappointment that was! While there were some color similarities and occasional recurring brushstrokes, the style lacked distinct character and was largely incoherent.

”Then I ran some more.“ ©

It was the twelfth attempt that finally yielded a style that was both a) good-looking and b) fairly unique. Dark and gritty, it featured a somber color palette, with sprawling, branch-like elements present in almost every generation.

Despite its consistency, the style still felt somewhat generic, and I was on a quest for something truly exceptional!

You know the saying, be careful what you wish for because you might just get it? That's precisely what happened. On the nineteenth attempt, I was greeted with a very intriguing, unexpected, and distinctive interpretation of Chernobog.

Curious about the potential of this new style, I added it to the traditional Benchmark prompts.

And while the consistency here is quite arguable, what stays is the overall craziness of the outcome! So then I thought—what if I apply it to a more random set of prompts? (Brace yourselves.)

The bottom line is: --style random is a fascinating feature. While it seldom returns unique (and even less frequently, consistent) styles, it's still an ocean teeming with potential. You just might discover some real pearls!

Fine-tuning the tune

/tune
prompt
Hayao Miyazaki's flat 2D illustration depicting cute little kitten monk. Extremely detailed anime style --v 5.2

But how would your tuned styles respond to fine-tuning parameters, like --stylize, --chaos, and even --stop?

Stylize

--stylize acts as a dial for the level of flair—or Midjourney's artistic touch—in your picture; lower values keep it closer to the essence of your prompt, while higher values grant Midjourney more freedom to interpret.

For this test, I picked three samples in the Style Tuner:

Here are the results of applying the (rather inconsistent) resulting style to some of the Benchmark prompts:

Choosing this style for our test, I added the --stylize parameter to the cyberpunk character prompt and, starting from the minimum value of 20, gradually raised it to the maximum of 1000. The results speak for themselves.

Quite a range!
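For reference, the two ends of that sweep would be prompted roughly like this, with <cyberpunk character prompt> and <code> standing in for the actual Benchmark prompt and the tuned style code:

/imagine
prompt
<cyberpunk character prompt> --style <code> --stylize 20 --v 5.2

/imagine
prompt
<cyberpunk character prompt> --style <code> --stylize 1000 --v 5.2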

Chaos

--chaos ”shakes things up“ by introducing more surprise elements to your prompt, resulting in a wider variety of outcomes that can become quite experimental and art-house in style when the value goes high enough. 🤪

Keeping with the spirit of the parameter name, for this experiment, I made a selection from each of the 32 image pairs, so I won't display them here to save space. Instead, I will just show the four previews for the cyberpunk character prompt with the resulting tuned style, imagined with various --chaos values.

Note how a --chaos 75 setting results in only one image that somewhat resembles the initial generations, with the other three diverging significantly. However, lower values can reveal previously unseen aspects of your tuned style without ”breaking” it.
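Sketched as a prompt, that high-chaos variant would look something like this (placeholders in angle brackets):

/imagine
prompt
<cyberpunk character prompt> --style <code> --chaos 75 --v 5.2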

Stop

--stop essentially signals to Midjourney when to “put down its brush”, allowing you to control the level of detail and polish in the final image.
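In prompt form, adding it to a tuned style looks something like this—the --stop 80 value is purely illustrative; the lower the value, the earlier Midjourney stops rendering and the softer, less detailed the result:

/imagine
prompt
<your prompt> --style <code> --stop 80 --v 5.2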

To see how this parameter works with tuned styles, I’ve adjusted my Miyazaki set, focusing on more painterly directions.

And even though you can clearly see where the resulting image’s features come from, and you can find a lot of these features in paintings...

...it is still not painterly enough. But what if we add --stop to that?

As you can see, it's an effective method for adjusting the 'sharpness' in your image and taming styles (whether tuned or not) that tend to be excessively detailed.

/tune
prompt
elaborate complex intricate cybernetic mechanism. Cords, wires, slots, LED lights, layers of electronics. 1920s scientific photograph with grid overlay, marks and notes --chaos 20 --v 5.2

Stop

For this test, I selected eight style directions from the set, aiming for more illustrative images, even though most were quite far from Miyazaki’s distinctive style:

Here are the results of applying this surprisingly consistent style to some of the benchmark prompts:

Combo styles and Style modifiers

/tune
prompt
Deepdream-inspired minimalist digital glitch art abstraction. Fluid shapes, ethereal lighting effects --chaos 25 --v 5.2

And as if the vast array of options wasn't already impressive, Midjourney further expands the potential with combo styles. Just combine two (or more!) style codes after --style, separated by a hyphen: --style <code1>-<code2>.

For this part of the study, I created an abstract prompt with strongly pronounced visual features, focusing on techniques rather than context or subjects. I also wanted my pairs to be as varied as reasonably possible—so I added a bit of --chaos to my initial /tune prompt.

Then, my task was to produce three very different yet consistent styles to combine them afterward. So, meet our heroes: The Fine, The Creepy, and The Focused.

The Fine

With largely abstract picks with a color spectrum from bright to washed-out as the input.

The Creepy

With style directions selected for their strangeness and specificity (as opposed to the abstract nature of The Fine), resulting (spoiler!) in a largely unsettling style with a disturbing, physical, almost tactile quality that can be quite frightening.

The Focused

Focused (only two directions picked) yet contrasting selection—one pastel and specific, the other abstract and vibrant.

The Combos

With such distinct contenders in play, it's time to merge them! For each combo in the first round, I used the prompt that shaped all three styles: Deepdream-inspired minimalist digital glitch art abstraction. Fluid shapes, ethereal lighting effects.
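For instance, blending The Fine and The Creepy—with <code 1> and <code 2> standing in for their respective style codes—would be prompted like this; a third style is added by chaining another hyphen and code:

/imagine
prompt
Deepdream-inspired minimalist digital glitch art abstraction. Fluid shapes, ethereal lighting effects --style <code 1>-<code 2> --v 5.2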

Finally, let's see what happens with all three styles combined:

In conclusion, the Style Tuner's combo styles take Midjourney's capacity for creative synthesis to the next level.

What's important is that the blending process provides a clear view of how styles mix and interact. This adds a layer of predictability and control to the artistic process, letting us create more intentional and nuanced designs.

What about Style modifiers?

A style modifier in Midjourney refers to a name, title, or technique—or a combination thereof—that the AI recognizes from its dataset. By including these terms in your prompt, you signal to Midjourney to apply a specific style to your generation.

For instance, if you add by Pablo Picasso to your prompt, Midjourney will try its best to make a Picasso painting out of it.

Let's explore what happens when we add a style modifier to a prompt with a tuned style. For this experiment, I selected five directions that stood out as the softest and least pronounced in terms of texture…

…which resulted in quite a distinct illustrative style with fine features. Let’s see how it applies to, say, an artistic technique, a photographer’s and a painter’s styles, and also an abstraction.

And just like that—with only five well-chosen direction picks in the Style Tuner—you can create a powerful style that not only enhances base prompts but also blends really well with other Midjourney styles!

But what about Image Prompts? Will this style work with an existing picture? Let’s find out.

This little bonus experiment brings me neatly along to the final chapter of this study…

Images for prompts

/tune
prompt

We know that /tune accepts Image Prompts and Multi-Image Prompts as input. But how exactly does that function?

You insert a publicly accessible URL of your image (or multiple URLs, each separated by a space) before your text prompt. In doing so, Midjourney generates images based on your text prompt while also taking cues from the image or set of images provided.

ⓘ Typically, Multi-Image Prompts—those that utilize more than one source image—do not require accompanying text. However, with /tune, the rules differ. Even if your prompt includes two or more URLs, you must still include some textual description.
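Putting it all together, a purely illustrative image-based /tune prompt could be structured like this—the URLs are placeholders for your own publicly accessible images, and the text part is required:

/tune
prompt
https://example.com/image-1.png https://example.com/image-2.png surreal theatrical stage, characters in intricate costumes, dramatic cinematic lighting --v 5.2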

The Stage Show

For my first test, I used some elaborately staged photographs from my archive. By selecting images from the same series, I hoped to “multiply” the influence of the style:

And to emphasize the visual style, I also accompanied it with the following description: surreal theatrical stage, characters in intricate costumes, dynamic poses, illuminated by dramatic, cinematic lighting. Staged photograph depicting atmospheric installation, grand theatre set design. Playful and dreamlike imagery, theatre academia, troubadour's tale. Intensity, live performance mood, theatrical staging (courtesy of Midjourney’s /describe ;)).

In the Style Tuner, I picked five directions that both resonated with the source images and were roughly similar to one another.

Contrary to all my expectations, the resulting style was a far cry from what I had described and “fed” to the Style Tuner:

Nothing matched: not the color palette, not the mood, and definitely not the level of detail.

This made me wonder: what if we explicitly ask Midjourney to consider a specific context while using the tuned style created from that very context?

So, I “optimized” the prompt: staged photograph depicting [subject] on a theatrical stage. Elaborate set design, cinematic lighting. (To conserve space, I'll refrain from repeating this lengthy text in the captions below ¯\_(ツ)_/¯)

Much better! Yes, these are not exact replicas of the original images, but they certainly incorporate many more elements from the original images—such as color, atmosphere, and lighting—than the prompts without specific context.

One might wonder: would such an optimized prompt (or Optimal Prompt↗︎) be as effective without a tuned style?

Simple. Contrasty. BW

This time, I went all out, inputting ten portraits with identical visual styles, lighting, and framing; minimalist yet dramatic, characterized by stark contrasts and lack of small details:

I set the Tuner to Raw mode, aiming for more realistic outcomes and minimal Midjourney interference. To the initial /tune prompt, I added the following text: black-and-white photographic close-up portrait against a bright-white background, with dramatic, contrasty side-lighting.

To keep the new style focused, I made selections from just three pairs: choosing images that were straightforward, contrasty, and compositionally fitting.

This time, I was convinced, the style would not let me down—I anticipated a perfect black-and-white close-up portrait of a cool Chernobog on the first attempt!

If you listen closely, you can hear the echo of my expectations falling apart… But that won't stop us, will it?

So I pushed on.

To every prompt in my test set, I added a very simple context: black-and-white photograph depicting [subject]. This time around, I generated each image twice—once with and once without my tuned style. And here is what happened:

I also used a slightly different text to test my tuned style with an Image Prompt.

This experiment solidified my conviction that tuned styles, particularly those based on images and explicit descriptions, truly shine when they're given the right context. In essence, Optimal Prompts reign supreme. :)

Let's summarize

The Style Tuner, with all its brilliance, is a beast with character, wild and unpredictable—traits that can be as exciting as they are challenging.

Yet, if you invest time in figuring out its workings, patterns emerge. Despite our best efforts to harness it, a certain (probably high) degree of uncertainty will remain. The Tuner has a lot of surprises up its sleeve. But it is this very unpredictability that is at the heart of its allure.

Let's quickly go through what we learnt from this lo-ong study:

1

With Midjourney's new Style Tuner, you can generate thousands of unique styles from a single prompt.

2

The fewer samples you select, the more focused and consistent your style will be.

3

The more specific your initial prompt, the more likely the output style will align well with your particular subject or context.

4

If your style deviates from the desired direction, revisit your set and adjust or omit the pairs that may be steering the style away. Copy the new code and test your prompt once more!

5

Image Prompts and Multi-Image Prompts in the Style Tuner require a text component. Tuned styles that originate from images tend to work much better when the prompts you use them with are thematically aligned with that original text part and the source image(s). Without that context, Image Prompt-based tuned styles are largely “unstable.”

6

There are a lot of exciting ways to experiment with tuned styles:  try Raw mode, add --tile, --chaos, or --aspect to the initial prompt. And when applying a tuned style to another prompt, experiment with --stylize, --chaos, and --stop; try combining your tuned styles, and, of course, don't forget to add existing pictures into the mix with Image Prompts.

Conclusion

Tips and tricks apart, what we are witnessing is the birth of the new Midjourney magic! Thousands of combinations and possible outcomes from just one prompt. An infinite ocean of creative possibilities!

As my friend and Midlibrary teammate Jos perfectly noted: “I don’t remember the last time I wanted to spend so much time with Midjourney.”

P.S.

In recognition of our Patreon community support, we've provided all Patrons with access to Personal Libraries. This feature enables users to save their favorite Midjourney styles from Midlibrary and curate them into Collections.

With the advent of Style Tuner, we're excited to announce our plans for a new Midlibrary app designed to save, organize, and reuse your custom-tuned styles!

If you want to contribute to the development of this innovative feature, please consider supporting us on Patreon:

❤️ Support Midlibrary on Patreon! →

/discuss

If you like our guides and studies, please, consider supporting us. It's thanks to our Patrons that we are able to maintain and develop Midlibrary, create better educational content, and keep it free for all!

Support Midlibrary on Patreon! →

All samples are produced by Midlibrary team using Midjourney AI (if not stated otherwise). Naturally, they are not representative of real artists' works/real-world prototypes.

We'll be grateful for shares and backlinks!
