MSN ETree Redesign
A visual redesign of an MSN feature, for which we used generative AI imaging.
The visual system team within the wider WebXT studio at Microsoft was approached by product designers on MSN to reassess the visual design of their ETree feature.
The ETree feature was a program within MSN Weather where users could earn points by interacting with the weather app and completing daily tasks within MSN. These points contributed to growing a virtual tree. Once a user's virtual tree reached level 10, Microsoft would partner with the non-profit Eden Reforestation Projects, which focuses on restoring degraded mangrove forests in Kenya, to plant a real tree on the user's behalf. This was a gamified rewards experience that also supported Microsoft's commitments to sustainability and environmental causes.
Objective
Move away from using the Microsoft emoji language
The previous design for the ETree feature had been done very quickly to keep the engineering team unblocked. MSN designers used any existing assets they had access to in order to create the visuals for the feature, and ended up using Microsoft's emoji assets for elements like the tree state, UI icons, and background visual.

This was a violation of Microsoft's visual design standards, as emojis are never to be used as UI elements or illustrations.

Align to the established Maximal illustration language
As part of the visual system team, it was our job to create and maintain a unified visual design language across Microsoft's web products (Edge, MSN, and Bing).
Our established language for illustration was called Maximal, which was intended to add expression and delight to our web experiences. The language was defined by the following characteristics:
Bold/rich
Vibrant/playful
Layered/textured

Incorporate generative AI tools into the visual design process
Our team took this project as an opportunity to explore using gen-AI imaging in our design process. There had been some previous tinkering with services like Midjourney, DALL·E, and Recraft to iterate on icon, logo, and visual design assets, but we hadn't yet made a concerted effort to use a tool like Midjourney to generate visual design explorations for an entire feature. This project was a perfect candidate for identifying the strengths and weaknesses of the technology and how it could help our studio going forward.
Sprint Plan
We broke this sprint up into two main phases to address the visual design of the ETree experience.

The timeline was mapped out over 5 weeks to give us proper time to explore and execute a design direction and dive into the use of generative AI tools during our exploration process.

The contributing members from our visual systems team:

Moodboarding/Auditing
We started with an audit of competitor apps for MSN's ETree feature. Most competitors came from the Chinese market and were provided to us by the MSN team (which was based in Beijing). This was a good way to begin identifying common visual themes across similar features.

After searching for more visual inspiration, we began mapping visuals on a scale of cartoon-like to realistic as a way of beginning to define the spectrum of potential visual directions for this feature.

Phase One: Tree Exploration
While we all contributed to the exploratory phase using Midjourney to produce visuals, our junior designer Sophia Choi spent the most time tinkering with prompts and style references to create an abundance of visual explorations. Here is a small sampling of the breadth of generations she created over the course of about a week.

After we felt we had a broad range of promising options, we looped in the wider MSN design team to begin narrowing down the visual explorations to prepare for user testing. We had each team member cast votes on the generations they felt best achieved the fundamental principles of the Maximal visual language:
Bold/rich
Vibrant/playful
Layered/textured

With input from the team, we took the most popular options and narrowed them down to 6 main concepts covering a spectrum from cartoon-like and playful to realistic and literal. With this range of potential directions, we were ready to run a user research study.

Phase One: User Testing
The Microsoft Web Experiences Team used an internal tool for user research studies called UX Labs to run side-by-side tests across a global pool of thousands of judges.
The objective of this particular user study was to gauge judges' preferences when it came to the visual direction of the ETree feature.
UX Labs Test One
Each of the tree concepts was placed in context within the ETree frame on a plain background. We ran a multi-concept side-by-side test in which judges saw the concepts paired against one another and were asked to choose which they preferred. Judges were given an explanation of the feature for context before beginning the test.


Through this data we gathered the following insights:
Judges responded most positively to these 3 concepts:
Cinematic Tree (92.5% win percentage)
Digital Tree (56.9%)
Bubble Tree (50%)
Judges responded most negatively to the existing emoji tree design (validation that a visual refresh was necessary)
Judges seemed to prefer more realistic-looking tree shapes and textures over the more stylized and artificially-textured trees
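The win percentages above come from head-to-head side-by-side votes. As a rough sketch of how such pairwise tallies roll up into a per-concept win rate (the concept names and vote counts below are hypothetical illustrations, not the actual study data):

```python
from collections import defaultdict

def win_percentages(pairwise_votes):
    """Compute each concept's win percentage from pairwise A-vs-B vote tallies.

    pairwise_votes: list of (concept_a, concept_b, votes_for_a, votes_for_b)
    Returns {concept: share of its head-to-head votes won, as a percentage}.
    """
    wins = defaultdict(int)
    total = defaultdict(int)
    for a, b, votes_a, votes_b in pairwise_votes:
        wins[a] += votes_a
        wins[b] += votes_b
        total[a] += votes_a + votes_b
        total[b] += votes_a + votes_b
    return {c: round(100 * wins[c] / total[c], 1) for c in total}

# Hypothetical tallies for three concepts shown head-to-head:
votes = [
    ("Cinematic", "Digital", 70, 30),
    ("Cinematic", "Bubble", 80, 20),
    ("Digital", "Bubble", 55, 45),
]
print(win_percentages(votes))
# → {'Cinematic': 75.0, 'Digital': 42.5, 'Bubble': 32.5}
```

A concept's win percentage is simply the fraction of all head-to-head votes it received across every pairing it appeared in.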
UX Labs Test Two
With the insights from the first test, we ran a second test on a separate pool of judges to further narrow down the directions, this time using a more realistic background image and testing 4 variations of the concepts that performed best in Test One.


Through this test we gathered the following insights:
Judges reaffirmed their preference for the more realistic renderings of the tree (statistically significant result for Concept B, the most realistic rendering)
Judges generally did not prefer the more stylized/textured "Bubble Tree" treatment
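The "statistically significant" call for Concept B can be sanity-checked with an exact two-sided binomial test on a head-to-head vote split: under the null hypothesis that judges have no preference, each vote is a fair coin flip. The vote counts below are illustrative assumptions, not the study's actual numbers.

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: probability of an outcome at least
    as unlikely as k successes in n trials under the null P(success) = p."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = pmf[k]
    # Sum the probabilities of all outcomes no more likely than the observed one.
    return sum(q for q in pmf if q <= observed + 1e-12)

# Hypothetical split: Concept B preferred by 120 of 200 judges.
p_value = binomial_two_sided_p(120, 200)
print(p_value)  # well under 0.05, so this split would be unlikely under "no preference"
```

A 50/50-ish split, by contrast, yields a p-value near 1 and would not justify calling the preference significant.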
Phase One: Refinement
Based on the results from the user studies, we proceeded with the "Cinematic" concept by iterating on the design of the tree and how it would scale to all stages of the tree growing process.

We settled on a set that felt realistic, with pops of detail and color that enhanced the tree's visual appeal.

Phase Two: Exploration
With the tree design finalized, we moved on to exploring the background visual for the feature in Midjourney, again iterating along the spectrum from realistic to more dream-like and tactile.

Phase Two: Refinement
When we felt we had some good options, we tinkered with the existing compositions and began applying them in context to see how they held up with the tree in the foreground.

The internal team felt that the tactile treatment for the background added more visual interest by featuring the tree in more of a stage-like setting. While the more realistic imagery may have matched the styling of the actual tree more closely, it felt too much like stock imagery and had a flatness that didn't lend itself well to the feature. Rather than run this treatment through the user-research process, we finalized the design by consensus among the design team.
Final Design
The assets were finalized on a direction in which the background reads as a stage/platform living behind the feature's UI, setting off the tree asset in the foreground.
Learnings
This updated design was shared, packaged, and handed off to the MSN design team, and eventually implemented in updates to the ETree feature. While the deliverable was important, the most valuable takeaways from this project were learnings around incorporating generative AI imagery into our design process and how it could continue to be used by our broader studio.
Generative AI is an incredible multiplier for visual design exploration
With just a few designers, and only one dedicated full-time to generating explorations in Midjourney, we were able to create an incredible variety and breadth of visuals to consider as art directions. Producing this with 3D artists alone would have required a significant investment in resources.
Generative AI tools like Midjourney are great at exploring broadly, inefficient at fine-tuning
The advantage of these tools, in their current form, lies in the broad exploration phase of visual design, or in projects where there is a looseness to the final assets that are needed. The amount of tinkering with text prompts and style references required to fine-tune aspects like composition and visual detail is tedious and time-consuming.
Visual quality and specs are still limited in gen AI tools
Support for specific dimensions, background transparency, and PPI resolution was lacking in Midjourney at the time. This created challenges when it came to producing finalized, optimized design assets.
These tools are evolving very rapidly
In the 5 weeks between beginning this project and delivering the design to the MSN team, Midjourney had already published an updated model for rendering imagery. These tools are evolving and improving literally day-to-day.
This is a technology worth investing in and following, as it will continue to change visual design processes and address the inefficiencies mentioned above.







