
CodeMirror and Spell Checking: Solved

A screenshot of the CodeMirror editor with spelling issues and the browser's spell checking menu opened.

For years I’ve wanted spell checking in CodeMirror. We use CodeMirror in our Review Board code review tool for all text input, in order to allow on-the-fly syntax highlighting of code, inline image display, bold/italic, code literals, etc.

(We’re using CodeMirror v5, rather than v6, due to the years’ worth of useful plugins and the custom extensions we’ve built upon it. CodeMirror v6 is a different beast. You should check it out, but we’re going to be using v5 for our examples here. Much of this can likely be leveraged for other editing components as well.)

CodeMirror is a great component for the web, and I have a ton of respect for the project, but its lack of spell checking has always been a complaint for our users.

And the reason for that lies mostly with the browsers and “standards.” Starting with…

ContentEditable Mode

Browsers support opting an element into what’s called Content Editable mode. This allows any element to become editable right in the browser, like a fancy <textarea>, with a lot of pretty cool capabilities:

  • Rich text editing
  • Rich copy/paste
  • Selection management
  • Integration with spell checkers, grammar checkers, AI writing assistants
  • Works as you’d expect with your device’s native input methods (virtual keyboard, speech-to-text, etc.)

Simply apply contenteditable="true" to an element, and you can begin typing away. Add spellcheck="true" and you get spell checking for free. Try it!

And maybe you don’t even need spellcheck="true"! The browser may just turn it on automatically. But you may need spellcheck="false" if you don’t want it on. And it might stay on anyway!
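
If you want to play with this outside of CodeMirror, here’s a minimal sketch in plain JavaScript (the #notes selector is just a placeholder for any block element on your page):

const el = document.querySelector('#notes');  /* any block element */

el.contentEditable = 'true';  /* same as contenteditable="true" */
el.spellcheck = true;         /* same as spellcheck="true" */
el.focus();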

Here we reach the first of many inconsistencies. Content Editable mode is complex and not perfectly consistent across browsers (though it’s gotten better). A few things you might run into include:

  • Ranges for selection events and input events can be inconsistent across browsers and are full of edge cases (you’ll be doing a lot of “let me walk the DOM and count characters carefully to find out where this selection really starts” checks).
  • Spell checking behaves quite differently on different browsers (especially in Chrome and Safari, which might recognize a word as misspelled but won’t always show it).
  • Rich copy/paste may mess with your DOM structure in ways you don’t expect.
  • Programmatic manipulation of the text content using execCommand is deprecated with no suitable replacement (and you don’t want to mess with the DOM directly or you break Undo/Redo). It also doesn’t always play nice with input events.

CodeMirror v5 tries to let the browser do its thing and sync state back, but this doesn’t always work. Replacing misspelled words on Safari or Chrome can sometimes cause text to flip back-and-forth. Data can be lost. Cursor positions can change. It can be a mess.

So while CodeMirror will let you enable both Content Editable and Spell Checking modes, it’s at your own peril.

Which is why we never enabled it.

How do we fix this?

When CodeMirror v5 was introduced, there weren’t a lot of options. But browsers have improved since.

The secret sauce is the beforeinput event.

There are a lot of operations that can involve placing new text in a Content Editable:

  • Replacing a misspelled word
  • Using speech-to-text
  • Having AI generate content or rewrite existing content
  • Transforming text to Title Case

These will generate a beforeinput event before making the change, and an input event after making the change.

Both events provide:

  1. The type of operation:
    1. insertText for speech-to-text or newly-generated text
    2. insertReplacementText for spelling replacements, AI rewrites, and other similar operations
  2. The range of text being replaced (or where new text will be inserted)
  3. The new data (either as InputEvent.data or as one or more InputEvent.dataTransfer.items[] entries)

Thankfully, beforeinput can be canceled, which prevents the operation from going through.

This is our way in. We can treat these operations as requests that CodeMirror can fulfill, instead of changes CodeMirror must react to.

A screenshot of a text field in CodeMirror with text highlighted and macOS's AI Writing Tools providing a concise version of the text.

Putting our plan into action

Here’s the general approach:

  1. Listen to beforeinput on CodeMirror’s input element (codeMirror.display.input.div).
  2. Filter for the following InputEvent.inputType values: 'insertReplacementText', 'insertText'.
  3. Fetch the ranges and the new plain text data from the InputEvent.
  4. For each range:
    1. Convert each range into a start/end line number within CodeMirror, and a start/end within each line.
    2. Issue a CodeMirror.replaceRange() with the normalized ranges and the new text.

Simple in theory, but there are a few things to get right:

  1. Different browsers and different operations will report those ranges on different elements. They might be text nodes, they might be a parent element, or they might be the top-level contenteditable element. Or a combination. So we need to be very careful about our assumptions.
  2. We need to be able to calculate those line numbers and offsets. We won’t necessarily have that information up-front, and it depends on what nodes we get in the ranges.
  3. The text data can come from more than one place:
    1. An InputEvent.data attribute value
    2. One or more strings accessed asynchronously from InputEvent.dataTransfer.items[], in plain text, HTML, or potentially other forms (see the sketch after this list).
  4. We may not have all of this! Even as recently as late-2024, Chrome wasn’t giving me target ranges in beforeinput, only in input, which was too late. So we’ll want to bail if anything goes wrong.
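
Item 3.2 above is worth a closer look. When the text only arrives through InputEvent.dataTransfer.items[], reading it is asynchronous. Here’s a hedged sketch of one way to wrap that (getPlainTextFromEvent() is a name I’m inventing for illustration; it isn’t part of any API):

/*
 * A sketch of pulling plain text out of an InputEvent, falling back
 * to the asynchronous dataTransfer items when evt.data isn't set.
 */
async function getPlainTextFromEvent(
    evt: InputEvent,
): Promise<string | null> {
    if (evt.data !== null && evt.data !== undefined) {
        return evt.data;
    }

    const items = evt.dataTransfer?.items;

    if (items) {
        for (const item of Array.from(items)) {
            if (item.kind === 'string' &&
                item.type === 'text/plain') {
                /* getAsString() only offers a callback API. */
                return new Promise(resolve => item.getAsString(resolve));
            }
        }
    }

    return null;
}

If you go this route, keep in mind that preventDefault() still has to be called synchronously inside the beforeinput handler; only the text extraction can be deferred. The handler below sticks to the simpler synchronous evt.data / getData('text') path.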

Let’s put this into practice. I’ll use TypeScript to help make some of this a bit more clear, but you can do all this in JavaScript.

Feel free to skip to the end, if you don’t want to read a couple pages of TypeScript.

1. Set up our event handler

We’re going to listen to beforeinput. If it’s an event we care about, we’ll grab the target ranges, take over from the browser (by canceling the event), and then prepare to replay the operation using CodeMirror’s API.

This is going to require a bit of help figuring out what lines and columns those ranges correspond to, which we’ll tackle next.

const inputEl = codeMirror.display.input.div;
	
inputEl.addEventListener('beforeinput',
                         (evt: InputEvent) => {
    if (evt.inputType !== 'insertReplacementText' &&
        evt.inputType !== 'insertText') {
        /*
         * This isn't a text replacement or new text event,
         * so we'll want to let the browser handle this.
         *
         * We could just preventDefault()/stopPropagation()
         * if we really wanted to play it safe.
         */
        return;
    }

    /*
     * Grab the ranges from the event. This might be
     * empty, which would have been the case on some
     * versions of Chrome I tested with before. Play it
     * safe, bail if we can't find a range.
     *
     * Each range will have an offset in a start container
     * and an offset in an end container. These containers
     * may be text nodes or some parent node (up to and
     * including inputEl).
     */
    const ranges = evt.getTargetRanges();

    if (!ranges || ranges.length === 0) {
        /* We got empty ranges. There's nothing to do. */
        return;
    }

    const newText =
           evt.data
        ?? evt.dataTransfer?.getData('text')
        ?? null;

    if (newText === null) {
        /* We couldn't locate any text, so bail. */
        return;
    }

    /*
     * We'll take over from here. We don't want the browser
     * messing with any state and impacting CodeMirror.
     * Instead, we'll run the operations through CodeMirror.
     */
    evt.preventDefault();
    evt.stopPropagation();

    /*
     * Place the new text in CodeMirror.
     *
     * For each range, we're getting offsets CodeMirror
     * can understand and then we're placing text there.
     *
     * findOffsetsForRange() is where a lot of magic
     * happens.
     */
    for (const range of ranges) {
        const [startOffset, endOffset] =
            findOffsetsForRange(range);

        codeMirror.replaceRange(
            newText,
            startOffset,
            endOffset,
            '+input',
        );
    }
});

This is pretty easy, and applicable to more than CodeMirror. But now we’ll get into some of the nitty-gritty.

2. Map from ranges to CodeMirror positions

Most of the hard work really comes from mapping the event’s ranges to CodeMirror line numbers and columns.

We need to know the following:

  1. Where each container node is in the document, for each end of the range.
  2. What line number each corresponds to.
  3. What the character offset is within that line.

This ultimately means a lot of traversing of the DOM (we can use TreeWalker for that) and counting characters. DOM traversal is an expense we want to incur as little as possible, so if we’re working with the same nodes for both ends of the range, we’ll just calculate it once.

function findOffsetsForRange(
    range: StaticRange,
): [CodeMirror.Position, CodeMirror.Position] {
    /*
     * First, pull out the nodes and the nearest elements
     * from the ranges.
     *
     * The nodes may be text nodes, in which case we'll
     * need their parent for document traversal.
     */
    const startNode = range.startContainer;
    const endNode = range.endContainer;

    const startEl = (
        (startNode.nodeType === Node.ELEMENT_NODE)
        ? startNode as HTMLElement
        : startNode.parentElement);
    const endEl = (
        (endNode.nodeType === Node.ELEMENT_NODE)
        ? endNode as HTMLElement
        : endNode.parentElement);

    /*
     * Begin tracking the state we'll want to return or
     * use in future computations.
     *
     * In the optimal case, we'll be calculating some of
     * this only once and then reusing it.
     */
    let startLineNum: number | null = null;
    let endLineNum: number | null = null;
    let startOffsetBase: number | null = null;
    let startOffsetExtra: number | null = null;
    let endOffsetBase: number | null = null;
    let endOffsetExtra: number | null = null;

    let startCMLineEl: HTMLElement | null = null;
    let endCMLineEl: HTMLElement | null = null;

    /*
     * For both ends of the range, we'll need to first see
     * if we're at the top input element.
     *
     * If so, range offsets will be line-based rather than
     * character-based.
     *
     * Otherwise, we'll need to find the nearest line and
     * count characters until we reach our node.
     */
    if (startEl === inputEl) {
        startLineNum = range.startOffset;
    } else {
        startCMLineEl = startEl.closest('.CodeMirror-line');
        startOffsetBase = findCharOffsetForNode(startNode);
        startOffsetExtra = range.startOffset;
    }

    if (endEl === inputEl) {
        endLineNum = range.endOffset;
    } else {
        /*
         * If we can reuse the results from calculations
         * above, that'll save us some DOM traversal
         * operations. Otherwise, fall back to doing the
         * same logic we did above.
         */
        endCMLineEl =
            (range.endContainer === range.startContainer &&
             startCMLineEl !== null)
            ? startCMLineEl
            : endEl.closest(".CodeMirror-line");

        endOffsetBase =
            (startEl === endEl && startOffsetBase !== null)
            ? startOffsetBase
            : findCharOffsetForNode(endNode);
        endOffsetExtra = range.endOffset;
    }

    if (startLineNum === null || endLineNum === null) {
        /*
         * We need to find the line numbers that correspond
         * to either missing end of our range. To do this,
         * we walk the input element's children (one per
         * rendered line) until we find both our missing
         * line numbers.
         */
        const children = inputEl.children;

        for (let i = 0;
             (i < children.length &&
              (startLineNum === null || endLineNum === null));
             i++) {
            const child = children[i];

            if (startLineNum === null &&
                child === startCMLineEl) {
                startLineNum = i;
            }

            if (endLineNum === null &&
                child === endCMLineEl) {
                endLineNum = i;
            }
        }
    }

    /*
     * Return our results.
     *
     * We may not have set some of the offsets above,
     * depending on whether we were working off of the
     * CodeMirror input element, a text node, or another
     * parent element. And we didn't want to set them any
     * earlier, because we were checking to see what we
     * computed and what we could reuse.
     *
     * At this point, anything we didn't calculate should
     * be 0.
     */
    return [
        {
            ch: (startOffsetBase || 0) +
                (startOffsetExtra || 0),
            line: startLineNum,
        },
        {
            ch: (endOffsetBase || 0) +
                (endOffsetExtra || 0),
            line: endLineNum,
        },
    ];
}


/*
 * The above took care of our line numbers and ranges, but
 * it got some help from the next function, which is designed
 * to calculate the character offset to a node from an
 * ancestor element.
 */
function findCharOffsetForNode(
    targetNode: Node,
): number {
    const targetEl = (
        targetNode.nodeType === Node.ELEMENT_NODE)
        ? targetNode as HTMLElement
        : targetNode.parentElement;
    const startEl = targetEl.closest('.CodeMirror-line');
    let offset = 0;

    const treeWalker = document.createTreeWalker(
        startEl,
        NodeFilter.SHOW_ELEMENT | NodeFilter.SHOW_TEXT,
    );

    while (treeWalker.nextNode()) {
        const node = treeWalker.currentNode;

        if (node === targetNode) {
            break;
        }

        if (node.nodeType === Node.TEXT_NODE) {
            offset += (node as Text).data.length;
        }
    }

    return offset;
}

Whew! That’s a lot of work.

CodeMirror has some similar logic internally, but it’s not exposed, and not quite what we want. If you were working on making all this work with another editing component, it’s possible this would be more straightforward.

What does this all give us?

  1. Spell checking and replacements without (nearly as many) glitches in browsers
  2. Speech-to-text without CodeMirror stomping over results
  3. AI writing and rewriting, also without risk of lost data
  4. Transformation of text through other means

Since we took the control away from the browser and gave it to CodeMirror, we removed most of the risk and instability.

But there are still problems. While this works great on Firefox, Chrome and Safari are a different story. Those browsers are a bit lazier when it comes to spell checking, and even once they’ve found some spelling errors, you might not see the red squigglies. Typing, clicking around, or forcing a round of spell checking might bring them back, but might not. But this is their implementation, and not the result of the CodeMirror integration.

Ideally, spell checking would become a first-class citizen on the web. And maybe this will happen someday, but for now, at least there are some workarounds to get it to play nicer with tools like CodeMirror.

We can go further

There’s so much more in InputEvent we could play with. We explored the insertReplacementText and insertText types, but there’s also:

  • insertLink
  • insertFromDrop
  • insertOrderedList
  • formatBold
  • historyUndo

And so many more.

These could be integrated deeper into CodeMirror, which may open some doors to a far more native feel on more platforms. But that’s left as an exercise to the reader (it’s pretty dependent on your CodeMirror modes and the UI you want to provide).
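
As a taste of what that exercise might involve, here’s a hedged sketch (not something the plugin does today) that maps a couple of those types onto CodeMirror v5 calls. It assumes the same inputEl and codeMirror variables from earlier, and a Markdown-style mode where ** marks bold text:

inputEl.addEventListener('beforeinput', (evt: InputEvent) => {
    switch (evt.inputType) {
        case 'historyUndo':
            evt.preventDefault();
            codeMirror.undo();
            break;

        case 'historyRedo':
            evt.preventDefault();
            codeMirror.redo();
            break;

        case 'formatBold':
            /* Assumes a Markdown-ish mode where ** marks bold. */
            evt.preventDefault();
            codeMirror.replaceSelection(
                `**${codeMirror.getSelection()}**`,
                'around');
            break;

        default:
            /* Let the browser (or the handler above) deal with it. */
            break;
    }
});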

There are also improvements to be made, as this is not perfect yet (but it’s close!). Safari still doesn’t recognize when text is selected, leaving out the AI assisted tools, but Chrome and Firefox work great. We’re working on the rest.

Give it a try

You can try our demo live in your favorite browser. If it doesn’t work for you, let me know what your browser and version are. I’m very curious.

We’ve released this as a new CodeMirror v5 plugin, CodeMirror Speak-and-Spell (NPM). No dependencies. Just drop it into your environment and enable it on your CodeMirror editor, like so:

const codeMirror = new CodeMirror(element, {
  inputStyle: 'contenteditable',
  speakAndSpell: true,
  spellcheck: true,
});

A CodeMirror v6 version will come in the future, but probably not until we move to v6 ourselves (and we’re waiting on a lot of the v5 world to migrate over first).


Integration and Simulation Tests in Python

One of my (many) tasks lately has been to rework unit and integration tests for Review Bot, our automated code review add-on for Review Board.

The challenge was providing a test suite that could test against real-world tools, but not require them. An ever-increasing list of compatible tools has threatened to become an ever-increasing burden on contributors. We wanted to solve that.

So here’s how we’re doing it.

First off, unit test tooling

First off, this is all Python code, which you can find on the Review Bot repository on GitHub.

We make heavy use of kgb, a package we’ve written to add function spies to Python unit tests. This goes far beyond Mock, allowing nearly any function to be spied on without having to be replaced. This module is a key component to our solution, given our codebase and our needs, but it’s an implementation detail — it isn’t a requirement for the overall approach.

Still, if you’re writing complex Python test suites, check out kgb.
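
To give a flavor of it, here’s a minimal sketch (run_tool() is a stand-in function I’m making up, not Review Bot code) showing how a spy can swap in a canned result while still letting you assert on the call:

import unittest

import kgb


def run_tool(*args):
    """Stand-in for a wrapper around a real command line tool."""
    raise NotImplementedError('this would invoke the real tool')


class SpyExampleTests(kgb.SpyAgency, unittest.TestCase):
    def test_run_tool(self):
        # Swap in a canned result without replacing the function itself.
        spy = self.spy_on(run_tool, op=kgb.SpyOpReturn('fake output'))

        result = run_tool('--version')

        self.assertEqual(result, 'fake output')
        self.assertTrue(spy.called_with('--version'))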

Deciding on the test strategy

Review Bot can talk to many command line tools, which are used to perform checks and audits on code. Some are harder than others to install, or at least annoying to install.

We decided there are two types of tests we need:

  1. Integration tests — run against real command line tools
  2. Simulation tests — run against simulated output/results that would normally come from a command line tool

Since our goal is to ease contribution, we have to keep in mind that we can’t err too far on that side at the expense of a reliable test suite.

We decided to make these the same tests.

The strategy, therefore, would be this:

  1. Each test would contain common logic for integration and simulation tests. A test would set up state, perform the tool run, and then check results.
  2. Integration tests would build upon this by checking dependencies and applying configuration before the test run.
  3. Simulation tests would be passed fake output or setup data needed to simulate that tool.

This would be done without any code duplication between integration or simulation tests. There would be only one test function per expectation (e.g., a successful result or the handling of an error). We don’t want to worry about tests getting out of sync.

Regression in our code? Both types of tests should catch it.

Regression or change in behavior in an integrated tool? Any fixes we apply would update or build upon the simulation.

Regression in the simulation? Something went wrong, and we caught it early without having to run the integration test.

Making this all happen

We introduced three core testing components:

  1. @integration_test() — a decorator that defines and provides dependencies and input for an integration test
  2. @simulation_test() — a decorator that defines and provides output and results for a simulation test
  3. ToolTestCaseMetaClass — a metaclass that ties it all together

Any test class that needs to run integration and simulation tests will use ToolTestCaseMetaClass and then apply either or both @integration_test/@simulation_test decorators to the necessary test functions.

When a decorator is applied, the test function is opted into that type of test. Data can be passed into the decorator, which is then passed into the parent test class’s setup_integration_test() or setup_simulation_test().

These can do whatever they need to set up that particular type of test. For example:

  • Integration test setup defaults to checking dependencies, skipping a test if not met.
  • Simulation test setup may write some files or spy on a subprocess.Popen() call to fake output.


For example:

class MyTests(kgb.SpyAgency, TestCase,
              metaclass=ToolTestCaseMetaClass):
    def setup_simulation_test(self, output):
        self.spy_on(execute, op=kgb.SpyOpReturn(output))

    def setup_integration_test(self, exe_deps):
        if not are_deps_found(exe_deps):
            raise SkipTest('Missing one or more dependencies')

    @integration_test(exe_deps=['mytool'])
    @simulation_test(output=(
        b'MyTool 1.2.3\n'
        b'Scanning code...\n'
        b'0 errors, 0 warnings, 1 file(s) checked\n'
    ))
    def test_execute(self):
        """Testing MyTool.execute"""
        ...

When applied, ToolTestCaseMetaClass will loop through each of the test_*() functions with these decorators applied and split them up:

  • Test functions with @integration_test will be split out into a test_integration_<name>() function, with a [integration test] suffix appended to the docstring.
  • Test functions with @simulation_test will be split out into test_simulation_<name>(), with a [simulation test] suffix appended.

The above code ends up being equivalent to:

class MyTests(kgb.SpyAgency, TestCase):
    def setup_simulation_test(self, output):
        self.spy_on(execute, op=kgb.SpyOpReturn(output))

    def setup_integration_test(self, exe_deps):
        if not are_deps_found(exe_deps):
            raise SkipTest('Missing one or more dependencies')

    def test_integration_execute(self):
        """Testing MyTool.execute [integration test]"""
        self.setup_integration_test(exe_deps=['mytool'])
        self._test_common_execute()

    def test_simulation_execute(self):
        """Testing MyTool.execute [simulation test]"""
        self.setup_simulation_test(output=(
            b'MyTool 1.2.3\n'
            b'Scanning code...\n'
            b'0 errors, 0 warnings, 1 file(s) checked\n'
        ))
        self._test_common_execute()

    def _test_common_execute(self):
        ...

Pretty similar, but less to maintain in the end, especially as tests pile up.

And when we run it, we get something like:

Testing MyTool.execute [integration test] ... ok
Testing MyTool.execute [simulation test] ... ok

...

Or, you know, with a horrible, messy error.

Iterating on tests

It’s become really easy to maintain and run these tests.

We can now start by writing the integration test, modify the code to log any data that might be produced by the command line tool, and then fake-fail the test to see that output.

class MyTests(kgb.SpyAgency, TestCase,
              metaclass=ToolTestCaseMetaClass):
    ...

    @integration_test(exe_deps=['mytool'])
    def test_process_results(self):
        """Testing MyTool.process_results"""
        self.setup_files({
            'filename': 'test.c',
            'content': b'int main() {return "test";}\n',
        })

        tool = MyTool()
        payload = tool.run(files=['test.c'])

        # XXX
        print(repr(payload))

        results = MyTool().process_results(payload)

        self.assertEqual(results, {
            ...
        })

        # XXX Fake-fail the test
        assert False

I can run that and get the results I’ve printed:

======================================================================
ERROR: Testing MyTool.process_results [integration test]
----------------------------------------------------------------------
Traceback (most recent call last):
    ...
-------------------- >> begin captured stdout << ---------------------
{"errors": [{"code": 123, "column": 13, "filename": "test.c", "line': 1, "message": "Expected return type: int"}]}

Now that I have that, and I know it’s all working right, I can feed that output into the simulation test and clean things up:

class MyTests(kgb.SpyAgency, TestCase,
              metaclass=ToolTestCaseMetaClass):
    ...

    @integration_test(exe_deps=['mytool'])
    @simulation_test(output=json.dumps({
        'errors': [
            {
                'filename': 'test.c',
                'code': 123,
                'line': 1,
                'column': 13,
                'message': 'Expected return type: int',
            },
        ],
    }).encode('utf-8'))
    def test_process_results(self):
        """Testing MyTool.process_results"""
        self.setup_files({
            'filename': 'test.c',
            'content': b'int main() {return "test";}\n',
        })

        tool = MyTool()
        payload = tool.run(files=['test.c'])
        results = MyTool().process_results(payload)

        self.assertEqual(results, {
            ...
        })

Once it’s running correctly in both tests, our job is done.

From then on, anyone working on this code can simply run the test suite and make sure their change hasn’t broken any simulation tests. If it has, and it wasn’t intentional, they’ll have a great starting point in diagnosing their issue, without having to install anything.

Anything that passes simulation tests can be considered a valid contribution. We can then test against the real tools ourselves before landing a change.

Development is made simpler, and there’s no worry about regressions.

Going forward

We’re planning to apply this same approach to both Review Board and RBTools. Both currently require contributors to install a handful of command line tools or optional Python modules to make sure they haven’t broken anything, and that’s a bottleneck.

In the future, we’re looking at making use of python-nose’s attrib plugin, tagging integration and simulation tests and making it trivially easy to run just the suites you want.
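
As a rough sketch of how that could look (this isn’t shipping code), the metaclass could tag each generated test using nose’s attr decorator, and then the command line does the filtering:

from nose.plugins.attrib import attr


def tag_test(test_func, test_type):
    """Hypothetical helper the metaclass could use when splitting tests."""
    return attr(test_type)(test_func)


# The generated test_integration_*() / test_simulation_*() functions would
# get tagged with tag_test(..., 'integration') or tag_test(..., 'simulation'),
# and then only the suites you want get run:
#
#     nosetests -a integration
#     nosetests -a simulation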

We’re also considering pulling out the metaclass and decorators into a small, reusable Python package, making it easy for others to make use of this pattern.


Designing Unity: The Start Menu

Early on when we began to develop Unity for Workstation, we started to look at ways to give users access to the guest’s start menu. This seemed like an easy thing to solve at first. A month later we realized otherwise. We debated for some time and discussed the pros and cons of many approaches before settling on a design.

We had a number of technical and design restrictions we had to consider:

  • The UI should be roughly the same across Windows and Linux hosts.
  • Start menu contents must always be accessible regardless of the desktop environment on Linux.
  • Need to cleanly support start menus from many VMs at once.

Our chosen design

The design we settled on was to have a separate utility window for representing the start menu. This window can auto-hide and dock to any corner of the screen, or remain free-floating, and provides buttons for each VM. The buttons are color-coded to match the Unity window’s border and badge color. When you first go into Unity, the window briefly shows, indicating where it’s docked.

Unity Start Menu Integration

There are many advantages to this design.

  • You don’t have to re-learn how to use it between platforms or even desktop environments.
  • It’s pretty easy to get to and yet stays out of your way when you don’t need it.
  • All the start menus are easily accessible from one place.
  • The start menu buttons are color-coded to match the Unity windows.
  • Users can control whether the window is docked in a corner or free-floats on the desktop.
  • We have a lot of flexibility for feature expansion down the road.

Why not integrate with the Start Menu?

Since the first Workstation 6.5 beta, I’ve been asked why we chose the design we have instead of integrating the start menu into the notification area or into the existing Applications/Start menus. The idea to do so seems kind of obvious at first, but there are many reasons we didn’t go that route.

Let’s start with the host’s Applications/Start menus. This seems the most natural place to put applications, as the user is already used to going there. We began going down this route, until we realized the problems associated:

  • On Linux, not everyone runs GNOME, KDE or another desktop environment with an applications menu supporting the .desktop spec correctly or at all. This means we’d be drastically limiting which desktop environments we could even represent applications in.
  • In the case of GNOME, it would add more clicks to get access to any application (Applications → Virtual Machines → VM Name → Applications). This becomes tedious, quickly. Also, from my tests, adding entries three levels deep doesn’t always appear to work reliably across desktops.
  • In Windows, the situation is just as unclear. People tend to think that Windows only has one Start menu, but in reality, we’d have to support three (Classic, XP, and Vista). For quicker access, we’d need to add something to the root menu, and each of these start menus have slight differences in how we can do this. None of the solutions are even particularly good there, as entries may be hidden from the user to make room for other pinned applications.
  • In summary, where you go to access the start menu contents will be different not just on each OS, but across desktop environments and even different modes of the same environment (on Windows).

What about the notification area/system tray?

Another possibility that has been brought up is to use the notification area and to tie the start menu to an icon there. While this would generally work, it wouldn’t work too well.

  • On Linux, it’s frowned upon to put persistent entries in the notification area. A panel applet could work, but users would have to manually add it, and it would be GNOME or KDE-specific.
  • There’s no guarantee there even is a notification area or even a panel in Linux desktops.
  • On Windows, the icon may be automatically hidden in the system tray to make room for other icons.
  • The icon is such a small area to click on, making it annoying to launch applications quickly.
  • The icon is generally not too discoverable.

Tips and future improvements

While we’ll probably keep our current model, there are definitely improvements I’d personally like to make in some future release. One such possible example is to allow dragging an entry off onto the panel or desktop to create a shortcut/launcher. If you frequently access certain applications, you’d be able to put them wherever you want them for quick access. Clicking them while the VM is powered off would power the VM back on in the background and then run the application.

A lot of this exists already. While there is no automatic launcher creation, you can create your own that run:

vmware-unity-helper --run /path/to/vmx C:\path\to\application parameters

This is not a supported feature at this time and may have bugs, but in the general case it should work just fine.

