Survey Results

The survey is now closed and, as promised, I've written up the results of the responses here.

The first thing I'd like to say is a big thank you! There were 326 responses in total, which is easily twice as many as I expected to get. I really appreciate everyone taking the time to fill it out, and as well as sharing the results with everyone here, I think they're going to prove really useful in driving future development.

I'm going to go through the survey a little out-of-order to organise the results more meaningfully. I certainly learned something about how to write good questions when setting up this results page, and if I ever run another survey I will do it slightly differently!

Also remember that none of the questions in the survey were required, so I've listed the number of data points for each chart - this gives you an idea of how many people answered. The survey was advertised on twitter, in the release notes and upgrade text, and on the RenderDoc download page. Without putting a pop-up in the program (which would be rather obnoxious) this reached as many people as is reasonable. However it's quite possible that there are other users who either didn't find the time or didn't want to fill out the survey, or never noticed it.

RenderDoc Usage

First up we have some information about how many people have used RenderDoc.

How many respondents have used RenderDoc before.

This is fairly self-explanatory - I asked everyone to fill out the survey whether they had used the tool or not. Obviously this will not cover anyone who hasn't even heard of the tool and never knew about the survey, but I think it has reached a fairly representative cross-section of users. There's also no way of differentiating right now whether those who have used the tool used it once or twice, or use it regularly. Having said that, the number of people who have used the tool is pretty large and is around about what I expected - at least to the order of magnitude. Graphics programming is a fairly specialised subject, so it's not a surprise that there aren't thousands of responses.

For those who selected that they hadn't used RenderDoc, they only had to answer two other questions - which debugger they preferred (and why), and then how they heard of the tool. We'll see the latter question in a little bit, but for now let's look at what other tool they're using and why.

Users who have not used RenderDoc - preferred debugger.


Note this question was multiple choice so respondents could select more than one reason.

For users who have not used RenderDoc - why they prefer another debugger.

There are a few interesting bits of information to draw from this. First off, there's a fairly long tail of preferred debuggers - the 'other' chunk is as big as the most popular single answer. This question was as comprehensive as I could make it, so many debuggers had only one or two selections. Likewise, the most common single answer was people who didn't use a debugger at all. It's a little hard to tell from the sample size, as well as the structure of the questions, whether these are graphics programmers who feel there isn't a debugger that is useful for them at all, whether they don't think a debugger would be useful, or maybe even whether they're not working with graphics at all.

However, most commonly people noted that they hadn't used RenderDoc either because it lacked support for their OS or because they didn't have a preferred debugger. From my perspective this is promising, as it indicates there are users who will be willing to use the tool once it is more available or featureful and fits their needs.

Next while we're talking about this, let's look at the same question for users who have used RenderDoc. At the end of the survey I asked those users whether they preferred and used RenderDoc most often, or whether they used another tool more often (and why!). About two-thirds of users did indeed say that they used RenderDoc primarily, but here are the breakdowns of the other third:

Users who have used RenderDoc but don't prefer it - preferred debugger.


Note this question was multiple choice so respondents could select more than one reason.

For users who have used RenderDoc - why they prefer another debugger.

We can see a noticeable shift here. For those who've used RenderDoc, their primary tool trends strongly towards proprietary console debuggers, with NVIDIA Nsight as a second big chunk. Similarly, lack of OS/platform support is lower on the list compared to lack of API or feature support. Interestingly, it's much more common for people to not have a specific preferred debugger, and instead use whichever one works at the time or has the feature they need.

I think this makes sense - developers working on Windows (and this is strongly correlated with professional gamedevs) have a much wider variety of options available, since the list of supported tools on other platforms is smaller. In particular, bringing console debuggers into the mix - the most popular preferred debugger here - again correlates with professional gamedevs.

The last question in this section was mostly just for curiosity, asking where people had heard about RenderDoc.

Where people heard about RenderDoc.

I don't think the results here are too surprising - more than half of people had heard of the tool through word of mouth (if we include 'at work' by definition as word of mouth). After that the most popular venue is twitter and other online routes (like forums, or simply google).

These results as I say were mostly just for interest and not something I will use for anything in particular. It's an open question whether 'advertising' the tool by doing presentations or trying to reach users through other routes would be productive and would increase the overall user count, or whether this indicates that mostly people learn of such tools via word of mouth. I'll leave that up to the reader to decide!

What I will say is that it's gratifying that people think the tool is good enough to share with others, as after all that is a recommendation of sorts.

APIs and Features

Next I asked about how many people use different APIs and features in RenderDoc.

In a way these questions are the most important, as they indicate where the most development time should go in terms of serving existing users. Of course this only reflects the proportion of users today. If new platform support is added (e.g. Linux, mac, phones) that could skew the API proportions, but it gives some idea of where the users are today.

First and most obvious - RenderDoc supports three APIs currently (D3D11, OpenGL, and Vulkan) - which ones are people using?

API usage across RenderDoc users.

This is the first surprising result for me. I expected D3D11 to be highest - after all, the tool became known for that and it has been supported the longest - but I was impressed by how many people are already using it for Vulkan, more than are using OpenGL. My expectation was that there would be a falloff left-to-right. This is slightly aided by the fact that until recently it was the only tool in the PC space that supported Vulkan, but I take it as an encouraging sign that interest in Vulkan is already almost half of D3D11 - at least from some perspective. Equally, I was surprised that so many people are using it for all 3 APIs.

I don't think there are any shocking course-changes to come here. D3D11 remains an important feature for RenderDoc and it will continue to be supported, as will OpenGL. Vulkan support is already mature and I'm actively working on bringing in new features. When it comes down to an either-or between e.g. OpenGL features and Vulkan features, the weight would be given to Vulkan, but not by a huge margin. Also, ensuring that users have a consistent and smooth experience across APIs is important - for everyone using 2 or 3 APIs (or more as they are added), the benefits of having a consistent and reliable user interface across them all will add up.

Next I asked about how many people had used specific features, to get an idea of where people spent their time.

Feature usage across RenderDoc users.

While less surprising than the API use, I still found these results interesting. Many of the features are placed where I'd expect them - shader debugging and pixel history used regularly on D3D11, as well as common utility features like debugging texture overlays and vertex picking.

What I didn't predict was how many people would be using the image viewing feature. Since it's not really obvious (RenderDoc is installed as an image handler on Windows, but not made the default) and not easy to 'discover' if you don't know about it, I expected the use to be lower - certainly not one of the most used features! It's worth going back and looking at it to see if there are any improvements that could be made that aren't too far out of scope for the tool (e.g. editing would be way out of scope). Since I imagine most people aren't opening regular jpgs, but rather graphics-specific formats like dds or exr, perhaps being able to recompress from one format to another would be of use. Food for thought at least.

I also honestly don't know what to make of the drop-off between use of CPU callstacks and custom visualisation shaders. I can understand that there might be a group of commonly used features and a group of rarely used features, but the grouping doesn't make much sense to me - if I'd had to predict it, I would have swapped several features from one side of the divide to the other.

Next, I asked some of the big questions - which features would users want next? This was a little oddly constructed and in retrospect it should have been phrased differently, but in general I split it into 'which single OS/API/platform support would you like added' and then 'how would you rate these several more functional features in priority'. Since they're quite similar, we'll look at them both together.

Most wished for future feature.


The score is calculated by taking the four possible answers and assigning scores from 1 to 4. The highest possible value is then about 1000 (roughly 250 responses to this question, each rating a feature 4).

Relative priority of upcoming planned features.

No surprises that D3D12 dominates the requests for new OS/API support. Many people mentioned D3D12 specifically in the comments section and it seems like many developers are beginning to ramp up on it and are in need of tools. Certainly D3D12 support is on the way and I don't think the results of this survey would have changed that, but it is clearly very important so should perhaps be prioritised up. At least, I don't think it would go down very well if I suggested developers instead switch to Vulkan!

After D3D12 comes native linux support, which is also not really a surprise. Interestingly, the third most popular choice was simply 'none of the above'. It's unclear whether these respondents were happy with the current OS/API support, or whether they meant to indicate that the more functional features were significantly more important to them than support for additional OSes or APIs.

We can also see here the slight weighting towards Vulkan over OpenGL mentioned before - the Vulkan features were consistently rated higher than the OpenGL ones (although again, not by much). Also, sorry to folks hoping for D3D9 support - it's not looking good for you and isn't likely to grow in priority!

Each feature was rated from low to high priority, which I assigned values of 1 to 4. Summing these up across the different features gives a fairly even distribution - certainly there are a few obviously most-wanted features and one least-wanted (most people have switched to 64-bit by now, so focusing on better 32-bit compatibility gets less relevant every day). No single feature, though, stands above the crowd as being called for above everything else. This could perhaps be an artifact of letting everyone choose freely - if someone voted every feature as 4 it just shifts all the numbers and adds no differentiation (no-one did, by the way!).
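To make the scoring concrete, here's a minimal sketch of how such a tally works. This isn't the actual survey tooling, and the feature names and ratings below are made up, but it shows the 1-to-4 weighting and why the ceiling sits at roughly 250 × 4 = 1000:

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <vector>

int main()
{
    // Hypothetical per-feature ratings; each entry is one respondent's
    // priority rating from 1 (low) to 4 (high).
    std::map<std::string, std::vector<int>> ratings = {
        {"Feature A", {4, 3, 4, 2, 4}},
        {"Feature B", {1, 2, 2, 3, 1}},
    };

    for(const auto &feature : ratings)
    {
        int score = 0;
        for(int r : feature.second)
            score += r; // sum of 1-4 ratings across all respondents

        // With ~250 respondents the maximum possible score per feature
        // would be ~250 * 4 = 1000.
        printf("%s: %d\n", feature.first.c_str(), score);
    }
    return 0;
}
```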

Lastly I asked about integrating RenderDoc's API.

How many users have integrated RenderDoc's API.

This is another question that doesn't hold any complete shocks but is not what I would have predicted from the outset. A little under half of people are using the integration API in one form or another, with only a handful of people indicating that they would likely never use it. A large number of people didn't know it existed, so it will be interesting to see how the figures change.

My takeaway from this is that I would like to hear some feedback from those people who are using the API, if any of them would like to see more functionality added. Right now it will cover the majority of cases where someone wants to trigger a capture programmatically, but I think there are some really interesting possibilities that would come with even tighter integration and I'd be curious to hear from anyone who had ideas along those lines.
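As an illustration of what that programmatic capture looks like today, here's a rough sketch of hooking up the in-application API, based on the renderdoc_app.h header that ships with RenderDoc. It's Windows-only for brevity, and the exact struct/enum names depend on the header version you build against, so treat it as a sketch rather than a drop-in snippet:

```cpp
// Sketch: triggering RenderDoc captures around a specific piece of work.
// Requires renderdoc_app.h (shipped with RenderDoc). On Linux you would
// dlopen librenderdoc.so instead of using GetModuleHandleA.
#include <assert.h>
#include <windows.h>

#include "renderdoc_app.h"

static RENDERDOC_API_1_0_0 *rdoc_api = nullptr;

void InitRenderDocAPI()
{
    // Only attach if the app was launched through RenderDoc, so that
    // renderdoc.dll is already loaded - never load the dll yourself.
    if(HMODULE mod = GetModuleHandleA("renderdoc.dll"))
    {
        pRENDERDOC_GetAPI RENDERDOC_GetAPI =
            (pRENDERDOC_GetAPI)GetProcAddress(mod, "RENDERDOC_GetAPI");

        int ret = RENDERDOC_GetAPI(eRENDERDOC_API_Version_1_0_0, (void **)&rdoc_api);
        assert(ret == 1);
    }
}

void RenderProblematicEffect()
{
    // Capture exactly the work you care about, rather than hitting the
    // capture key and hoping to catch the right frame. NULL device/window
    // pointers mean 'whichever is active'.
    if(rdoc_api)
        rdoc_api->StartFrameCapture(nullptr, nullptr);

    // ... issue the rendering commands you want captured ...

    if(rdoc_api)
        rdoc_api->EndFrameCapture(nullptr, nullptr);
}
```

Guarding every call on the API pointer means the integration is a harmless no-op whenever the application isn't running under RenderDoc.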

Crashes and Bugs

These questions were my attempt to find out how RenderDoc's stability is in the wild, based on people's impressions.

My perspective on RenderDoc's development can be a bit skewed. Most of the time, I never hear from people for whom the tool is working great and doing what they need. It's hard to be analytical about how stable the tool is when the only actual data points I get are crash uploads and github issues.

The motivation behind these questions was to figure out how people felt about the tool - to give them an opportunity to rate it high or low depending on how stable it was perceived to be. It's not necessarily scientific, but it does have value.

For the next chart I've combined three questions: a rating from 1 to 10 of how often blocking bugs like crashes or corrupted/garbage output appear; a rating from 1 to 10 of how often non-blocking bugs happen, like inconveniences, incorrect output, or broken individual features; and finally a general impression of how RenderDoc development is going - whether bugs are fixed swiftly and user feedback is responded to. In the actual survey the rating for bugs was reversed (1 being a good response and 10 being a bad one), but I've flipped that here (a rating of r becomes 11 - r) so the graphs have the same kind of x-axis.


The rating goes from 1 (Very poor development, many bugs encountered) to 10 (Very good development, almost no bugs encountered).

How users rated RenderDoc's stability and development efforts.

I hope people were honest and didn't skew their ratings high just because they knew I would be looking them over - these results are about as good as I could possibly have hoped for, and I'm really glad the impression is as good as it is. Certainly this doesn't mean that I can rest on my laurels, but it does mean that so far the situation is good. You could argue that people who were so dismayed by lack of stability or poor development that they didn't even see the survey, or didn't want to fill it out, aren't represented - but I will choose to believe that this is a reasonably representative sample, at least to some small margin of error.

The rating of non-blocking bugs is somewhat lower than blocking bugs, which is to be expected as hopefully crash bugs get seen, reported and fixed more easily and I have the automated crash reporter to help catch those. There is a curious dip at the '9' rating for both non-blocking bugs and development efforts. I'm not sure what to make of this, except perhaps that it's just a psychological thing of not wanting to give the highest rating and gravitating towards 8, or perhaps wanting to rate one below the rating for blocking bugs to indicate that they are more frequent.

Speaking of the crash reporter, I asked about its use - both how people are using it today, as well as how they'd feel about some ways I could improve it to make more useful bug reports.

How many users have reported a crash with the crash reporter.


Note that some people who answered this question haven't uploaded a crash before. Likely they meant to indicate what they would do if they did hit a crash.

Whether users have or would include contact details with a crash report.

The large majority of people have never used the crash reporter. In retrospect I should have attempted to distinguish between those who have seen crashes but chosen not to report them, compared with those who have never seen any crashes. As it stands it's not possible to determine the difference. However in a practical sense people who chose not to report crashes even anonymously would likely continue to do so, meaning it's not possible to do any better for them.

It's interesting that the majority of responses indicated that they always include contact details. My impression from the other side is that the vast majority of crashes do not include any contact details, and this is one of the reasons why the efficacy of crash uploads is limited. Many times the crash is either already fixed, or needs more information to investigate, and frequently I find that I'm unable to do anything but leave the crash alone for lack of contact details. I don't know how to reconcile this, although in absolute terms only 40 or so people selected this option, and I receive many more crash reports than that - perhaps an indication that those people who upload anonymous reports also don't fill out internet surveys?

One thing I do see though is that there are a number of people who are anonymous not for specific reasons of privacy, but simply because they click through or sometimes forget. These are cases where it is possible to improve the effectiveness of those crash reports, which is promising. The next question asked about those options.

Approval of proposed improvements to the crash reporter.

The conclusion here seems to be that most people (about three fifths) would be OK with any of the options described. In each case they would be easy to opt out of, or suitably anonymous, which I think helps.

The likelihood is that I will implement all three options, since there seems to be no large opposition. Certainly the majority isn't striking in each case, but these are relatively inoffensive features to add. I will need to take care though, since the implication is that two fifths of people are not OK with them being added. If this applies to you then I encourage you to get in touch.

Telemetry

I asked only one question - whether people would be OK with a system of anonymous telemetry.

User opinions on the telemetry/anonymous statistics proposed.

This question outlined a proposed system for anonymous telemetry, the idea being that I would be able to automatically gather much of the data that I asked about in this survey (e.g. API and feature usage) on a rolling basis. This would let me prioritise on a continual basis, as well as get a feel for where development effort was paying off the most - I could find out if a feature was used a lot or only a little, and base future development on that.

The details of the telemetry system are omitted here and will be nailed down more in future, but you can see that more than three-quarters of responses either said they would leave the telemetry on entirely freely, or they would leave it on and require manual validation/verification of the information sent. This is a good thing since without at least a majority of users on the telemetry system it becomes less representative and so less useful.

There were several more options to try and gauge people's feelings about the telemetry, and whether they were OK with it in principle separately from whether they would remain opted into it. We can see a more coarse breakdown below.

Proportion of users fundamentally opposed or OK with telemetry.

This is a coarse and conservative representation of those who are OK with telemetry being in the tool, whether or not they would personally opt out. In the 'opposed' category I included everyone who specifically chose that option, as well as everyone who chose 'Unsure'. Many of those may in fact be OK with it, but this keeps the estimate conservative.

The bottom line is that the telemetry will be implemented, and it seems like there are no large philosophical objections. I realise that the 3-8% or so of people who are opposed will not be happy with this decision, but all I can say is that I hope they will feel comfortable simply compiling out the telemetry, or opting out in the UI. I respect people's right to privacy, and everything will of course be open source, so it's easy to verify that opting out really is opting out - or to build your own, as some people indicated they would.

About the users

Lastly I asked a few questions about the users themselves, and how they use github.

What RenderDoc is being used for.

I don't think this is any big surprise - most people use RenderDoc for game development and while it's not the only use, it's generally the most common case where people are actually doing graphics programming. Likewise most people using the tool are using it at work, but not by a massive margin.

Those using it in non-gamedev fields are in the minority, although interestingly it is more common as a hobby than as a work/primary vocation. At a guess I would imagine this represents people learning graphics programming without a specific gamedev purpose in mind, although since the answers are fairly broad and user-interpreted, this remains unclear. I added this question mostly just for interest so I didn't want to clutter up the survey with detailed demographics questions.

What people have used the github repository for.

The last question just gives some insight into what people are using github for. The most notable thing here is that many people aren't using it at all, which makes sense if they have no interest in the open-source nature of the tool and simply want to download and use it.

My preference would be to have some of those people with no interaction start to move into reporting bugs and requesting features. More feedback is always better - although there is a correlation between those reporting no github usage and those giving RenderDoc the highest ratings for stability, there was a sizeable proportion that rated it less than perfect. In my mind that means they should be reporting those bugs, to make the tool better!

Conclusion

As I mentioned at the start, I am extremely grateful to everyone who took the time to fill out the survey. Although not included here, there were many very kind comments left at the end of the survey - I read every one and I appreciate the sentiments!

On the same tack, many people mentioned specific bugs or features that they wanted, but often without sufficient detail for me to reproduce the bug or outline what feature they were after. One of the biggest things that I keep trying to get out to everyone is just how available I am. I know that often people run into bugs which they then don't report to me, or they just never get in touch.

I'm very reactive to user feedback, and often when a bug is reported it can be fixed within a couple of days. I can understand that many people are used to the idea that reporting a bug is a lost cause, especially with company-run projects, where there is the feeling that it will drop into a black hole and it's not worth the effort to report. I fight strongly to shake off that impression for RenderDoc - if you have a bug, please report it. You can report it on github, report it to me directly via email, or use the in-UI way of reporting a bug that I will likely be adding. I'm very willing to work with people on bugs where they cannot share a repro case due to confidentiality, so as long as there is something I can possibly do to either investigate or fix the problem, I will do it.

Likewise feel free to get in touch with feedback along the lines of specific feature requests or improvements. I'm happy to hear about how workflows can be improved, and sometimes the feature is easier to implement than you think!

The bottom line is that I can only fix bugs that I know about, and I would rather have duplicate bug reports and emails to reply to than find someone running into a bug I've never seen. With the nature of graphics APIs it's very easy to hit a case where your particular use-case or environment is consistently broken, but which all the testing I do and all the other users never run into.

Once again, thank you to everyone who took part in the survey - hopefully these results are of interest, and keep an eye out for future development over at Github or my twitter, and if you've been inspired to get in touch and want to email me then please do!