I understand what you're saying; what worries me, though, is that the input you get from other technical teams is very hard to verify. Do you intend to measure the development velocity of the teams before and after the platform change takes effect?
In my experience it is extremely hard to measure the real development velocity (in terms of value-add, not arbitrary story points) of a single team, let alone of a group of teams over time, let alone as the result of a specific change.
This is not necessarily a criticism of Figma so much as, perhaps, a criticism of the entire industry.
Do you have an approach for measuring these things?
You're right that the input from other technical teams is hard to verify. On the other hand, verifying it is table stakes, especially for a platform team with broad impact on an organization. The purpose of the platform is to delight the paying customer, and every change should have a clear, well-documented line of sight to either increasing that delight or reducing frustration.
The canonical way to do that is to ensure that incoming demand arrives with both the ask and a solid justification. Even at top-tier organizations, asks are frequently good ideas, sensible ideas, nice ideas, probably-correct ideas -- but none of that is, by itself, acceptable. In my 38 years in the industry, the proportion of good/sensible/nice/probably-correct ideas that turn out to be justifiable is about 5%. The onus is on the asking team to provide a full and complete justification, with sufficiently detailed data, in a form that convinces the platform team's leadership. The bar needs to be high and, again, has to provide a clear line of sight to improving the life of the paying customer. The platform team has the authority and agency necessary to defend the customer, operations, and its own time, and can (and often should) say no. It is not the platform team's responsibility to prove or disprove something that someone wants, and that's not 'pushing back' or 'bureaucracy'; it's basic, sober, purpose-of-the-company fundamentals. Time and money are not unlimited. Nothing is free.
Frequently, the process of assembling the justification reveals to the asking team that they do not, in fact, have one; they stop there, and a disaster is correctly averted.
Sometimes the asking team is probably right but doesn't have the data to justify the ask. Things like 'Let's move to K8s because it'll be better' are possibly true, but also possibly not. Vibes/Hacker News/Reddit/etc. are beguiling to juniors but do not necessarily delight paying customers. The platform team has a range of options when it receives something of that form. "No" is valid, but so is "Maybe," paired with a pilot to run A/B measurements and gather the missing data; or even "Yes, but" with a plan to revert if the change turns out to be too expensive or ineffective after an incrementally structured phase 1. A lot depends on management's judgement, the available bandwidth, the opportunity cost, how one-way-door the decision is, etc.
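To make the "Maybe, plus a pilot" path concrete, here is a minimal sketch of what the measurement step could look like, assuming the pilot is judged on a single pre-agreed metric such as merge-to-production lead time. The cohorts, numbers, and metric are invented for illustration; none of this comes from the discussion above.

    # Hypothetical sketch: compare one pre-agreed metric (merge-to-production
    # lead time, in hours) between a pilot cohort on the new platform and a
    # control cohort on the old one. Stdlib only; data is made up.
    from math import sqrt
    from statistics import mean, stdev

    def welch_t(a, b):
        # Welch's t statistic for two independent samples with
        # (possibly) unequal variances.
        se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
        return (mean(a) - mean(b)) / se

    # Lead times collected over the same window for both cohorts.
    control = [30.1, 28.4, 35.2, 31.7, 29.9, 33.0, 27.8, 32.5]
    pilot = [22.3, 25.1, 21.8, 24.6, 23.9, 20.7, 26.2, 22.9]

    print(f"control mean: {mean(control):.1f}h, pilot mean: {mean(pilot):.1f}h")
    print(f"Welch t = {welch_t(pilot, control):.2f} (large negative favors the pilot)")

The statistics here are the least important part; what matters is that the metric and the success/abort threshold are agreed before the pilot starts, so the "Yes, but" revert decision is mechanical rather than political.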
At the end of the day, though, if you are not making a data-driven decision (or the closest you can get to one) and are instead acting on naked, unsupported asks/vibes/resume enhancement/Reddit/HN/etc., you're putting your paying customer at risk. At best you'll be accidentally correct. Being accidentally correct is the absolute worst kind of correct, because inevitably there will come a time when your luck runs out: you make a wrong choice, your paying customers get a worse or slower-to-improve experience, they desert you for a more soberly run competitor, and you have just killed your team, organization, or company.