Create breathtaking visuals in seconds:
Morpheus effortlessly turns your imaginative concepts into artistic reality using keywords and styles.
Morpheus is an experimental open-source project built by Monadical that aims to lower the expertise required to put open-source generative models into the hands of creators and professionals.
This project aims to provide a simplified, user-friendly frontend similar to the Stable Diffusion web UI, with an accompanying backend that automatically deploys any model to GPU instances behind a load balancer.
We’re hoping to get some feedback from users like you! Do you find this useful? Are there features missing from our roadmap that you would like to see? Have you run into any problems or bugs? We want to hear about it! Contact us via email or file an issue on GitHub.
Frontend
Our hosted version of the Morpheus project supports text-to-image, image-to-image, pix2pix, ControlNet, and inpainting capabilities. Collaborative image editing (MSPaint-style) is also near completion.
Text-to-Image:
Text-to-image uses a diffusion model to translate written descriptions into images, like DALL-E or Midjourney.
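The core idea behind diffusion models is to start from pure noise and iteratively denoise toward an image. The toy numpy sketch below illustrates that loop only; in a real model the denoising direction is predicted by a neural network conditioned on the text prompt, and the names and closed-form step here are illustrative assumptions, not Morpheus code.

```python
import numpy as np

def toy_denoise_step(x, target, t, num_steps):
    """One reverse-diffusion step: move the noisy sample a fraction of the
    way toward the clean image. Here the 'clean image' is known; in a real
    diffusion model this direction is predicted by a prompt-conditioned
    neural network."""
    alpha = 1.0 / (num_steps - t)  # final step (t = num_steps - 1) lands exactly on target
    return x + alpha * (target - x)

def sample(target, num_steps=10, seed=0):
    """Run the full reverse process, starting from Gaussian noise."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(target.shape)  # start from pure noise
    for t in range(num_steps):
        x = toy_denoise_step(x, target, t, num_steps)
    return x
```

Real samplers take dozens of such steps with a learned noise schedule; the shape of the loop, noise in, image out, is the same.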
Image-to-Image:
Image-to-image applies the same class of models as text-to-image, but starts from an image instead of text. A common use case is giving the model a hand-drawn picture along with a description that dictates the resulting image.
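In practice, image-to-image works by partially noising the starting image rather than beginning from pure noise; a "strength" parameter controls how much of the original survives. The numpy sketch below is a simplified linear blend to illustrate that idea; the function name and the exact blending formula are assumptions, not how any particular model implements its noise schedule.

```python
import numpy as np

def noise_init_image(init, strength, rng):
    """Blend the starting image with Gaussian noise. strength=0.0 keeps
    the image unchanged; strength=1.0 replaces it entirely with noise,
    which is equivalent to plain text-to-image. The denoising loop then
    runs from this partially-noised state."""
    noise = rng.standard_normal(init.shape)
    return (1.0 - strength) * init + strength * noise
```

Low strength values preserve the composition of the input sketch; high values give the model more freedom to reinterpret it.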
Pix2Pix:
Similar to Image-to-image, pix2pix starts with an image, but takes further editing instructions in the form of “X to Y”. See more details here.
ControlNet:
ControlNet provides precise control over image generation. By incorporating high-level instructions (or control signals), it allows users to influence specific aspects of the generated images. These can be factors such as pose, appearance, or object attributes. See more details here.
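A common ControlNet control signal is an edge map extracted from a reference image, which pins down composition while leaving style to the prompt. The sketch below computes a crude finite-difference edge map with numpy as an illustration; production pipelines typically use a proper edge detector such as Canny, and this helper is an assumption, not part of ControlNet itself.

```python
import numpy as np

def edge_control_signal(img):
    """Gradient-magnitude edge map, a toy stand-in for the Canny-style
    edge images that ControlNet can condition generation on."""
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    gx[:, 1:] = np.diff(img, axis=1)  # horizontal intensity changes
    gy[1:, :] = np.diff(img, axis=0)  # vertical intensity changes
    mag = np.hypot(gx, gy)
    peak = mag.max()
    return mag / peak if peak > 0 else mag  # normalize to [0, 1]
```

The resulting map is passed alongside the text prompt, so generated images follow the edges while the prompt controls content and style.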
Inpainting:
Inpainting allows you to draw a mask over the section of an image you would like changed. The model can remove unwanted elements, replace elements with other content, or add entirely new elements to the image. See more details here.
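At the compositing level, inpainting amounts to keeping the original pixels outside the mask and taking the model's output inside it. The numpy sketch below shows that final blend step under the assumption that mask values of 1 mark the region to repaint; it omits the diffusion process that actually produces the generated pixels.

```python
import numpy as np

def composite_inpaint(original, generated, mask):
    """Combine an original image with generated content: pixels where
    mask == 1 come from `generated`, everything else stays `original`."""
    return np.where(mask.astype(bool), generated, original)
```

Real inpainting pipelines also condition the model on the mask during sampling, so the repainted region blends seamlessly with its surroundings rather than being a hard cut-and-paste.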
Backend
The Morpheus project is designed to make deploying a generative image model as simple as possible.
- Go to the Models Info YAML file and update it with the information for the new model:
name: My new model name
description: My new model description
source: Hugging Face model ID
- Run the following command from your terminal:
docker compose run --rm model-script upload local sdiffusion
- Refresh the browser, and the model will be ready to use.
You can find more information on how to do this here.
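Before running the upload command, it can help to sanity-check that a model entry has the three fields shown above. The sketch below is a minimal stdlib-only check, assuming a flat "key: value" entry; the field names come from this post, but the parser is an illustration, not Morpheus's actual YAML handling (which would use a real YAML library).

```python
# Minimal validation sketch for a flat model entry like the one above.
# Handles only simple "key: value" lines; not a full YAML parser.

REQUIRED_FIELDS = {"name", "description", "source"}

def parse_flat_entry(text):
    """Parse 'key: value' lines into a dict."""
    entry = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        entry[key.strip()] = value.strip()
    return entry

def validate_entry(entry):
    """Raise if any of the required model fields is missing."""
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True

entry = parse_flat_entry("""
name: My new model name
description: My new model description
source: Hugging Face model ID
""")
validate_entry(entry)
```

Catching a missing `source` field here is cheaper than discovering it after the docker upload step fails.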
Example prompts:
- Unleash your imagination with Morpheus: Where dreams meet AI-powered artistry. By Ken Fairclough and Dylan Cole, trending on behance, trending on artstation, award winning
- The god of dreams creating art, highly detailed, artstation, concept art, matte, sharp focus
- A vision of the future of art, where human imagination and generative AI intertwine to unlock a new dimension of creativity, symbolized by a key turning in a lock to reveal a vibrant, surreal landscape
Roadmap
Migrate to a plugin-based architecture:
This will allow easy addition, modification, or removal of functionalities within the Morpheus codebase. It will also allow external contributors to propose new features as plugins and expose them in the Morpheus plugin marketplace.
Support for Lora embeddings:
This will allow users to choose from a wide variety of different styles when generating new images.
Integrate ray.io as the model serving engine:
Ray is a general framework for scaling AI and Python applications. With this integration, models will be served the same way locally and in production, and serving and scaling models within the system will improve.
Administrator:
This will allow the addition or removal of new models and styles through a graphical interface, simplifying the process.
Frequently Asked Questions
What is Morpheus?
How does Morpheus work?
How do I get started?
How do I generate an image?
Which services does Morpheus provide?