What's new in Cloud FinOps?

WNiCF - October 2025 - News

The FinOps Guys - Stephen Old and Frank Contrepois Season 6 Episode 1


In this episode of What's New in Cloud FinOps, hosts SteveO and Frank discuss the latest updates from the TBM conference, then move on to their usual FinOps news, with plenty of AWS announcements: new EC2 instances, cost management innovations, and sustainability initiatives in cloud services. They explore the implications of Lambda billing changes, real-time bidding technology, and the evolving landscape of cloud financial management training.

Chapters

00:00 Introduction and Conference Insights
02:07 AWS News Highlights
06:45 New EC2 Instances and Performance Improvements
10:41 Data Management and Optimization Tools
14:07 Storage Innovations and Cost Efficiency
18:34 Capacity Management and Kubernetes Integration
21:57 Pricing Changes and Licensing Optimization
27:39 Emerging Technologies and Cost Savings
28:46 AWS RTB Fabric for Real-Time Advertising
30:50 AWS Service Availability Updates
33:19 Sunsetting AWS Services
34:41 Amazon Q Developer: Cost Management Tool
38:27 Chronos-2: Advanced Forecasting Model
39:20 Upcoming FinOps Sessions at re:Invent 2025
41:30 GCP Anomaly Detection Features
43:19 Azure's Environmental Sustainability Features
44:54 AWS Customer Carbon Footprint Tool Update
46:21 New FinOps Training Courses


SteveO (00:01.137)
Hello everyone and welcome to this news episode of What's New in Cloud FinOps with myself, Stephen Old and my great friend.

Frank (00:09.294)
Hello.

SteveO (00:10.917)
Hello, Frank. Hello, everyone. I'm in Miami at the moment at the TBM conference, which has been really fantastic. I don't think we'll talk too much about it, but there are some interesting things to say, Frank. We didn't talk about this before, and I've just gone and done it, totally surprising you. There are some really interesting updates from Cloudability, too. Yeah, I've seen Webb as well, such a nice man. He's absolutely brilliant.

And he was really excited for yesterday's keynote, so I saw him before that. He was asking me about so many things being announced: just make sure you're there, make sure you listen. And you can really see how IBM is starting to pull all those acquisitions together, which was interesting. Obviously so much talk about AI everywhere. But yeah, it's been good. I haven't seen much of Miami; I might try to go for a walk later on. And obviously the bigger news.

Frank (00:56.622)
obviously

SteveO (01:08.049)
We actually saw each other a couple of weekends ago with the families, which was super nice.

Frank (01:11.02)
Yes, yes, that was super cool because we don't see each other very much outside of the podcast. So yeah, no, it was really good. So I went to the Lake District with the family, which is a place to spend half term. So the kids were on holiday, we organized a week away, and it was near where Steve lives. So coming back, we stopped there. It was brilliant.

SteveO (01:19.09)
Indeed.

SteveO (01:34.391)
Yeah, it's lovely. Yeah, the kids really got along, didn't they? Yeah, yeah, I know, they'd never met until then. Yeah, I think your eldest was very surprised by how much my son can talk.

Frank (01:37.204)
Yes, they did! Well, the grown-ups too, come on!

Frank (01:47.884)
Yes, yes, but that's good. He's discovered, you know... and you forget how yours were. But yeah, brilliant, and well done, Seb.

SteveO (01:56.593)
Right, okay, well the great news for me is there's a lot of AWS news and not loads of other news. So without too much further ado.

Frank (02:09.87)
Instances and compute. So the first one is AWS announcing Amazon ECS Managed Instances for containerized applications. The idea is that now you can have AWS manage it: it is like Fargate, but underneath you use EC2 instances. In the past you needed to configure and manage those EC2 instances yourself; now they can manage them for you.

So it's a little like Fargate on its own, but with EC2 instances, and you can start mixing and matching, and possibly you can start using your Savings Plans where they apply. So it's really something that changes quite a lot about how you manage your ECS.

SteveO (02:38.043)
to you.

SteveO (02:47.841)
I wonder if they've... Yeah, it's a bit like a Karpenter equivalent, in the way you have a little bit more definition of what you'd like it to use, but it's just automating that piece behind the scenes. It's interesting, I was thinking just the other day that I haven't really heard much from ECS for ages. And then this comes out, there we go.

Frank (03:11.32)
Yes.

Frank (03:16.066)
Yes, and you can put in custom advanced attributes: how much you want the local storage to be, how you want the GPUs, the memory. There are tons of things that you can set here, and it's normally going to choose the most cost-effective option too. So yeah, anyway, if you are using ECS, that might be something useful. I think it can also be useful if you are using rate optimization and you'd rather have the bigger discount that EC2 provides

SteveO (03:20.433)
Yeah.

SteveO (03:32.901)
Thanks.

Frank (03:44.302)
for example, than serverless, where the discount is lower. But yeah, that's news number one for you. News number two is: introducing the new compute-optimized Amazon EC2 C8i and C8i Flex instances. So this is Intel CPU; the C8i and C8i Flex instances offer up to 15% better price performance and 2.5x more memory bandwidth compared to C7i and C7i Flex.

They are 60% faster for NGINX web applications, 40% faster for AI deep learning recommendation models, 35% faster for Memcached stores. Interesting, using Memcached on a C instance seems strange, but anyway.

SteveO (04:31.181)
Yeah, well they've released... yeah, it does seem strange, doesn't it? Well, I guess you need the compute to be able to handle dealing with everything from there. But they've released the metal at the same time, I see; sometimes they don't.

Frank (04:44.172)
Yes, they've released the metal-48xl and metal-96xl, a maximum of 384 vCPUs and 768 gig of RAM. But the interesting bit is that these now run on the new sixth-generation Nitro cards, which provide double the network and EBS bandwidth. So they've improved their cards. As usual, the Flex is...

SteveO (05:01.883)
Yeah.

SteveO (05:07.321)
Nice. Yeah, the Flex goes up to 16xlarge, doesn't it?

Frank (05:12.878)
Yes, and the Flex is 5% better price performance at a 5% lower price. So it's better price performance compared to what, I'm unsure. But yeah, they are, as usual, that bit cheaper. They can reach up to full CPU performance 95% of the time. I still struggle to understand which companies are using full CPU performance 100% of the time.

SteveO (05:31.632)
Hmm.

SteveO (05:39.437)
Yeah. Yeah.

Frank (05:42.37)
But still, it seems obvious that you would go with the C8i Flex. Maybe there is no Spot for C8i Flex.

SteveO (05:52.274)
I'll have a look in a minute, just to find it. Why 95%? Do you know what I mean? I just think that could be a lower number for a better discount. There must be a reason. Everything they do is for a reason, yeah.

Frank (05:59.667)
yeah, no. Yeah, yeah.

Frank (06:09.546)
Absolutely, and usually a financial one. So we said last time that we thought it was because that would bring the cost of the previous generation to the same as the cost of these generations.

SteveO (06:20.603)
Yeah.

Frank (06:21.752)
But so it's available for On-Demand, Savings Plans and Spot Instances. And so, as usual, no RIs. Yeah, and now even the C8i and C8i Flex. So C8i and C8i Flex instances can be purchased as On-Demand, Savings Plans and Spot Instances. They're also available as Dedicated Instances and on Dedicated Hosts. So that was one release.

SteveO (06:27.217)
What flexes?

SteveO (06:38.724)
yeah, you're right.

SteveO (06:44.432)
Yeah.

Frank (06:49.568)
Another one which is slightly different, it's not really... okay. Amazon EC2 Auto Scaling now supports predictive scaling in six more regions. Now, normally we don't talk about things just going to more regions, but I think we've never spoken about Auto Scaling predictive scaling.

SteveO (07:05.137)
Unless it was last year; I haven't checked. We went back and checked if we did it this year; unless we talked about it last year, because it does feel familiar to me. Let me go back and look at last year's document. Anyway, remind people.

Frank (07:09.986)
Possibly.

Frank (07:13.686)
Yeah, but anyway, I just want to highlight it. Predictive scaling is appropriate for applications that experience recurring patterns of steep demand changes, such as early-morning spikes or when business resumes, and it learns the past patterns and improves auto scaling. So instead of having auto scaling that just reacts to what's happening, you can have something that pre-warms things or brings things up. If you did not know about it,

well, I think that's irrelevant; it's important for you to know about it, and that's it.
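For listeners who want to see what enabling this looks like, here is a minimal boto3 sketch; the Auto Scaling group name and target value are placeholders, and ForecastOnly mode is available as a dry run before letting it act.

```python
# Minimal sketch: attach a predictive scaling policy to an existing
# Auto Scaling group. Group name and target value are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-asg",      # placeholder ASG name
    PolicyName="predictive-cpu-40",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                # Scale so average CPU across the group sits around 40%
                "TargetValue": 40.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        # "ForecastOnly" produces forecasts without scaling, useful for a dry run
        "Mode": "ForecastAndScale",
        # Launch instances 5 minutes ahead of the predicted demand
        "SchedulingBufferTime": 300,
    },
)
```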

SteveO (07:51.59)
Yep. It is, yeah, indeed. So whether this went into instances or DB was a bit of a question, but it is about new instances being released for DBs, so we've put it in instances. Cloud SQL for MySQL and Cloud SQL for PostgreSQL have both had the C4A machines made generally available, and Cloud SQL for SQL Server has had the

Frank (07:52.076)
And I think the next news is...

SteveO (08:21.027)
N4 added as well, and it's generally available; you can use the Compute Engine alpha API, or... that's something to do with this. Let me just go find the machine series to let people know a little about them. That's interesting actually, let me just go back and look at that. So the C4A is now...

It's the C4A and the N4 machines; that's why I was looking at both. Right, okay. So the A and D machines and the... the Ns are Intel, aren't they? N4s, let me just check, I'm not going mad. Both are now released. So they basically released a new generation at the same time. I'm trying to quickly compare the pricing; I should have been doing this while I was looking at your news.

Frank (09:01.294)
Mm-hmm.

SteveO (09:12.337)
But I thought it was more interesting. I'll dig that out later on; you've got a lot of news, so I can go do that research and see how they compare. They're not all on the same page like they are with Amazon; I have to go find a different pricing bit to get the pricing out. I'll do that now. So basically, all the fours in those three areas have been released. I just want to check... and right, for SQL Server, though, it's just talking about the N4, not

Frank (09:17.058)
Yeah, that's nice.

SteveO (09:42.085)
the C4A, for SQL Server. So that's where the difference lies.

Frank (09:49.486)
OK.

This is Data, DB and AI, and there are a couple of news items, and they're minor. One is OpenSearch. So OpenSearch now supports Graviton4-based instances: the C8g, M8g, R8g and R8gd. The main thing, because it is OpenSearch, that I would say you really care about is the memory-optimized ones, so the R8g and R8gd. They run on Graviton4, which is 30% more price perform...

a better performance, I mean, not price performance. But it also offers the best price performance for compute-intensive, general-purpose, or memory-intensive OpenSearch workloads respectively. So have a look at that; those are new instances available for OpenSearch. The other one is, and I always forget what that is, Amazon Clean Rooms launching advanced configurations to optimize SQL performance.

So what I understood is that a clean room is a way you can share data, even between companies. So you can share access to data and databases; in this case it's Spark SQL queries. There are new advanced configurations, so good luck with that: advanced configurations for Spark, but they will allow you to

SteveO (11:01.041)
Yeah.

Frank (11:16.578)
reduce costs for complex queries using large datasets. So for example, they say an advertiser running lift analysis for an advertising campaign can specify a custom number of workers for an instance type and configure Spark properties, without editing the SQL query, to optimize costs. So my understanding is that we are moving some of the parameters from the query, where you could probably use a hint or equivalent tag, into a UI that gives you more control.

SteveO (11:47.046)
Yeah, that sounds right. Yeah, so Clean Rooms is the data clean room that you can create very quickly to collaborate with any company on AWS or Snowflake. Yeah, I thought the Snowflake part was very interesting, but maybe it's because some of that's being used under the hood. Who knows? Do you want the noise again?

Frank (11:57.134)
Cool. Yes.

Frank (12:08.526)
It's your storage, it's yours.

SteveO (12:09.521)
Oh no, no, I was looking at the pricing. Storage, yes, we moved one to storage, which is this one. No, it's not. Let me open it again. It must be this one, sorry. I'm gonna research... generally available. Oh, they've misspelled "available". I hadn't seen that, quite badly as well; I can't even pronounce how they've written it. Azure NetApp Files short-term clones.

Frank (12:27.362)
Yes.

SteveO (12:35.665)
I'm going to read some of this one, guys, because I found it a little bit confusing. Azure NetApp Files short-term clones enable space-efficient, instant read-write access to data by creating temporary thin clones from existing volume snapshots, eliminating the need for full data copies and enabling capacity savings. Ideal for software development, analytics, disaster recovery and testing, short-term clones support large datasets and allow quick refreshes

from the last snapshots. Short-term clones remain temporary and space efficient for up to 32 days, consuming capacity only for the incremental changes. This capability accelerates development and analytic workflows, improves quality and resilience, and reduces cost by avoiding full copy storage and minimizing operational overhead. And that's now generally available. So it sounds to me like you're basically able to live use parts of snapshots.

That's how I've taken it.

Frank (13:36.31)
I think it is more, and I will need to look at it, it's more like what used to be named clones, which means that you can read and write. It's like you have a snapshot, but you can read and write on it. So you can really say: I want a copy of production. You can create a copy, which is immediate, it is exactly production, and you can work on it without fear of destroying the original.

SteveO (13:42.193)
Hmm.

SteveO (13:49.307)
Yeah.

SteveO (13:55.213)
Yeah.

Exactly. Yeah, agreed. And at low cost.

Frank (14:03.384)
That was something ZFS was doing brilliantly well. Add the low cost because you only pay for the additional things you're adding. So if you're just using it read-only, you pay almost nothing more, which is fine.

SteveO (14:16.717)
Hmm. Yeah, it's interesting, wasn't it?

Frank (14:19.606)
Yeah, that's the difference between a snapshot and a clone if I remember correctly. That's my good old days.

SteveO (14:23.737)
Yes, it's a clone of a snapshot. Yeah, yeah, yeah, that makes sense. Thin clones from existing snapshots. Yeah, cool.

Frank (14:31.594)
Next one is visibility. So one thing that is interesting is that QuickSight is changing, and QuickSight is used by lots of us practitioners because, yeah, that's the most used tool, if I remember well, in particular obviously for AWS pricing. So now it's renamed: there is Quick Suite. So there is Amazon Quick Suite, which is an agentic teammate for answering questions and taking action. It is agent-

SteveO (14:51.376)
Yeah.

SteveO (14:55.003)
Boo.

Frank (15:01.144)
first, it can gen... and then inside you will have Quick Sight, which is now not one word but two, and you will be able to use natural language to create your dashboards. And you will be able to ask the agent, and it creates a graph and you can just drop it there. So they've changed everything; the UI will change; everything seems to have changed related to QuickSight. So...

Really cool. I've not tested it yet, but if you now see some purple when you turn on QuickSight, it's normal: you are in Amazon Quick Suite. And I think, let me check at the end, I think they were saying that they would migrate everyone to this. So if you are an existing Amazon QuickSight customer, you will be upgraded to Quick Suite, a unified digital workspace that includes

this. And so, yeah, you will be upgraded, full stop. It's not potential, there is no date, but you will be upgraded. Sounds like a threat, but anyway, yes.

SteveO (16:10.659)
It does.

Frank (16:12.654)
Yeah, any comments on that?

SteveO (16:15.697)
No, but I finally did my bit of research on my ones. Yeah, I finally found the right page where can compare the three. So we'll do that when you finish, but no comments on that one specifically.

Frank (16:18.454)
Here we go. You've done it.

Frank (16:25.486)
Next one is: monitor, analyze and manage capacity usage from a single interface with Amazon EC2 Capacity Manager. So that's new. EC2 Capacity Manager is a centralized solution to monitor, analyze and manage capacity usage. So the service aggregates capacity information with hourly refresh rates, and you can see optimization opportunities, streamlining capacity management. So what that means is that it's going to show you, when you have

hundreds of instance types and accounts, the difference between On-Demand, Spot Instances and Capacity Reservations, all in one place. It's going to get the data from the AWS Management Console, the CUR file, CloudWatch, and the EC2 Describe APIs. So it removes the operational overhead of manual data collection. You can really get tons of stuff. I think it's

interesting too if you are managing fleets of environments. It reminds me of the dashboard we're building in Strategic Blue, or really have built in Strategic Blue, to start having an understanding at the overall level. And so you can see how much is covered with reservations, how much is Spot, how much usage is On-Demand, what you could do with that, and all that stuff. And I think they're going to improve it to provide more and more recommendations, so...

SteveO (17:35.461)
back in the day.

SteveO (17:41.19)
Yeah, yeah.

SteveO (17:51.449)
Yeah, it's an interesting one. think,

There are now more companies getting to a point where they are starting to look at their usage as a portfolio, for various reasons. And I think, for me, it's really good that capacity management is starting to be thought about, because in the AI space specifically, I think people need to start considering constraints again and actually thinking of them as a good thing, for various reasons around control and cost management. And this might allow people to do that better.

Frank (18:04.109)
Yep.

Frank (18:27.008)
I hope so. At the same time, it's funny because, for me, when you say capacity, you bring me back to my times in the data center. And it might be simpler to implement, but in the end it seems that we are going back, in some form, in a positive way, to much more control, as you say, and AI is pushing that. It's both a step forward and a step backward on that side. It's quite interesting. I still haven't...

SteveO (18:35.588)
Yeah, yeah, exactly.

Frank (18:56.054)
Yeah, wrap my head around it, but I like your angle.

The next one is: split cost allocation data for Amazon EKS supports Kubernetes labels. And let me be very clear, yeah, I'm not an expert there, but split cost allocation data for Amazon EKS now allows you to import up to 50 Kubernetes custom labels per pod as cost allocation tags. So you can bring your Kubernetes labels inside the CUR file.

And that I think is going to extend enormously that core capability to investigate inside Kubernetes. And I think that, yeah, that is going to be really cool.
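To make that concrete, here is a rough pandas sketch of the kind of query this enables once the labels land in your CUR export. The file path and the column names (the split-cost columns and the label-derived tag column) are assumptions for illustration, so check them against the headers in your own export before relying on this.

```python
# Rough sketch: sum EKS split-allocated cost by a Kubernetes label that has
# been imported as a cost allocation tag. Column names are assumptions;
# verify them against your own CUR export.
import pandas as pd

cur = pd.read_parquet("cur-2025-10.parquet")  # placeholder path

# Keep only rows that carry split cost allocation data (pod-level rows)
pods = cur[cur["splitLineItem/ParentResourceId"].notna()]

# Hypothetical column produced by importing the "team" pod label as a tag
label_col = "resourceTags/user:team"

by_team = (
    pods.groupby(label_col)[["splitLineItem/SplitCost", "splitLineItem/UnusedCost"]]
    .sum()
    .sort_values("splitLineItem/SplitCost", ascending=False)
)
print(by_team.head(10))
```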

SteveO (19:40.69)
Yeah, that is really interesting. Hmm. Yeah, that's... And I have to look more into that. The problem is, with the people I'm working with that have issues with Kubernetes, I've not had an EKS case, and that's why we're having issues. Because with EKS, don't you get an inbuilt part of that? You get a kind of OpenCost or Kubecost as part of EKS now.

Frank (19:54.402)
Yes.

Frank (20:10.581)
Okay.

SteveO (20:10.897)
That was announced years ago; I seem to remember talking about that just after Kubecost had been acquired by IBM, which I think was obviously very good timing. But I was speaking to Webb yesterday, and that was part of it. So I wonder if that's a little bit of it. I guess this is giving you a next step beyond namespaces, isn't it? Which is needed.

Frank (20:35.116)
Yes, and it also helps you... you will be able to know, per workload, how it's going to be seen inside the CUR. I don't know where the split is, but this is split cost allocation, so it's probably going to be specific new columns. But at the same time, it will allow us to split, so we might have more rows inside the CUR file. That's my guess, along with more columns, because you will have, yeah...

SteveO (20:49.358)
Hmm

SteveO (20:58.135)
rather than with more columns as well. Have you seen new columns appear?

Frank (21:03.2)
I think... no, but I think split cost allocation already has columns which are standard. But there are columns which are not usually there unless you really use split cost. So my guess is it's going to use the split cost allocation columns. But they might be short-lived; the pods might be much shorter-lived than the instances underneath.

SteveO (21:07.665)
I see. I see.

SteveO (21:15.611)
Yeah.

SteveO (21:22.321)
100%. Should be.

Frank (21:23.79)
Yes, it should be. So you will have something there, you will have more things. It's interesting also that they talk about the CUR Query Library. And I've been working on a project with the CUR Query Library, where I download the CUR file and run all the CUR Query Library queries in one go. It takes less than a minute and you get all the results out of it. So I'll share more with you in the future. But I like that it still says QuickSight, written in the old format, not the new one. Shocking.

Cool, think it's music time.

SteveO (21:58.468)
Here, the star is on the wrong screen.

Frank (22:01.25)
There we go, that's pricing. So there was nothing on commitments that I found, so this is pricing. This is an external article, from AWS Tip. And the idea was to say, apart from the slightly sensational headline, that AWS Lambda has started billing... So the article is from October 2nd, but the idea is that now you start paying. So in the past, if you had a Lambda function which was zip-packaged, you would not pay for

the init phase. And now it is billable. And the init phase can... I don't think that's going to change, or should not change, your thing; if your init phase takes ages, there is a problem. But the interesting bit for me was, for example, what happens when, for example, there was an outage recently... when there is an outage and my Lambda system will just

fail and then restart, fail and then restart, fail and then restart, am I going to pay for all the inits of those retried starts? And there was someone on LinkedIn who highlighted it and said, hey, on the day of the outage, my Lambda cost went through the roof. And the first thing in my head was: that could be a cause. So have a look; if on the day of the outage your Lambda went through the roof, have a look,

SteveO (23:05.584)
Mmm.

Frank (23:24.226)
go into the nitty-gritty details and see if it's not that init coming into play. So AWS, for this, suggests a minimal 1% increase for most. They highlight that there is a VPC penalty, which is that the init takes way longer when you need to set up an ENI, an Elastic Network Interface, for example.

SteveO (23:49.583)
You

Frank (23:50.83)
Functions that run infrequently or scale aggressively, with high cold start rates, will have a direct penalty there, and large deployment packages will take longer.

SteveO (23:59.729)
When did this... So this happened on the 20th... Hang on, that's the fabric one. When did this kick in? Because I have a CUR which will have...

Frank (24:11.234)
So let me see. So starting August 1st, 2025.

SteveO (24:16.4)
I will check if I've got any Lambda in mine from back then. Hopefully I will, and then we can do a comparison.

Frank (24:25.89)
So what it highlights there is that the init, setting up the runtime, loading code, initializing, is fully billable. And in the past it was non-billable for the majority of zip-packaged functions. It was free.
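As a rough back-of-the-envelope, using the commonly published x86 Lambda rate of about $0.0000166667 per GB-second (region-dependent) and made-up workload numbers, the extra cost of the INIT phase being billed looks like this; all the invocation, duration and cold-start figures below are illustrative, not from the article.

```python
# Back-of-the-envelope: extra Lambda cost once the INIT phase is billed.
# Workload numbers are made up for illustration; the GB-second rate is the
# commonly published x86 price and may differ by region.
GB_SECOND_PRICE = 0.0000166667   # USD per GB-second
MEMORY_GB = 0.5                  # 512 MB function
INVOCATIONS = 1_000_000          # per month
HANDLER_MS = 120                 # average billed handler duration
INIT_MS = 300                    # average INIT duration (cold starts only)
COLD_START_RATE = 0.01           # assume 1% of invocations are cold starts

def gb_seconds(ms: float, count: float) -> float:
    """Convert milliseconds of runtime at MEMORY_GB into GB-seconds."""
    return (ms / 1000) * MEMORY_GB * count

handler_cost = gb_seconds(HANDLER_MS, INVOCATIONS) * GB_SECOND_PRICE
init_cost = gb_seconds(INIT_MS, INVOCATIONS * COLD_START_RATE) * GB_SECOND_PRICE

print(f"handler-only duration cost: ${handler_cost:.2f}/month")
print(f"extra cost from billed INIT: ${init_cost:.4f}/month "
      f"(+{init_cost / handler_cost:.1%})")
```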

SteveO (24:32.622)
Yeah.

SteveO (24:38.2)
Yeah, which is how I remember the one in question. So yeah, I will. We might have the data. Yeah. I mean, I think I spend... in fact, we might have a problem in this, in that I'm not sure I go outside the free tier on that one. Yeah.

Frank (24:41.358)
So here we go. You might have 1 % increase.

Frank (24:53.422)
Yeah, that's like me. I say, oh yeah, it's going to be one percent; I spend $4 a month. Ah, I'm not noticing.

SteveO (24:59.268)
Yeah, exactly. Yeah. And the account I think it's in spends $1.23. That's, yeah...

Frank (25:04.246)
Yeah, so, yeah, that was an interesting article.

SteveO (25:08.77)
Yeah, it's good. I've not seen that site before. I'm going to follow it.

Frank (25:13.152)
And next: Amazon EC2 now supports optimized CPUs for license-included instances. So Amazon EC2 now allows customers to modify an instance's CPU options to optimize the licensing costs of Microsoft Windows license-included workloads. So you can now customize the number of vCPUs and/or disable hyper-threading on Windows Server and SQL Server license-included instances to save on vCPU-based licensing costs.

That's your area of work more than mine. But it's valuable for database workloads like SQL Server that require high memory and IOPS but a lower vCPU count. So you can modify the CPU options: you can reduce vCPU-based licensing costs while maintaining memory and IOPS performance, achieve a higher memory-to-vCPU ratio, and customize CPU settings to match your specific workload requirements. For example,

SteveO (25:56.386)
Hmm.

Frank (26:11.79)
And I think that's important because those costs are massive. On an r7i.8xlarge running Windows and SQL Server license-included, you can turn off hyper-threading to reduce the default 32 vCPU count to 16, saving 50% on the licensing cost while still getting the 256 gig of memory and the 40,000 IOPS that come with the instance. And this is available in all commercial regions and GovCloud.
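The announcement is about modifying existing instances, but the same knob has long existed at launch time; here is a minimal boto3 sketch under the r7i.8xlarge reading of their example (16 cores, one thread per core, so 16 vCPUs instead of 32). The AMI and subnet IDs are placeholders.

```python
# Minimal sketch: launch a license-included instance with hyper-threading
# disabled so it exposes 16 vCPUs instead of the default 32.
# AMI and subnet IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder Windows/SQL Server LI AMI
    InstanceType="r7i.8xlarge",            # 16 physical cores, 32 vCPUs by default
    SubnetId="subnet-0123456789abcdef0",   # placeholder
    MinCount=1,
    MaxCount=1,
    CpuOptions={
        "CoreCount": 16,      # keep all 16 physical cores
        "ThreadsPerCore": 1,  # disable hyper-threading -> 16 licensed vCPUs
    },
)
```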

SteveO (26:41.188)
Yeah, I'm just checking with my team whether we're seeing a difference with it. Yeah. No, the licensing... because we do a lot in the licensing space, doing independent reviews of how much it will cost to migrate stuff. So I asked, firstly, do you guys know about this? To which they said, yes, we do. But I'm just not in that channel, because I'm in too many already, you know what it can be like.

Frank (26:45.56)
Cool. Or if you are using this, which are you talking about the EKS or you're talking about this licensing? Yep.

SteveO (27:07.664)
And I've asked if we're seeing it make a difference. I think they're doing some modeling at the moment, so that might be something I can update on next time. While we're in pricing, I will therefore jump in with my stuff around the new instances. Interestingly, the C4A seems to be relatively closely matched to the N2. The N2 goes to another decimal place, so it's slightly different. You can kind of see the pricing by CPU; they break it down in a different way in Google. So the CPU price is 0.054

Frank (27:17.11)
Yes.

SteveO (27:37.841)
versus the N2, which is 0.0537, and memory is 0.009 versus 0.0091. So it's just a decimal place issue. The N4, however, is looking quite a chunk cheaper. So compared to 0.054 it's 0.0413 per CPU, and rather than 0.009 for memory it's 0.007.

So it's about 20% cheaper, I think, the N4s versus both the N2s and the C4As. Yeah, it's quite a chunk. Yeah, it looks that way to me. Like you, I don't know my Google pricing as well as I know my AWS pricing, or in fact even my Azure pricing, and it is an area I need to kind of bone up on more. But yes, I think that follows the trend for memory

Frank (28:15.598)
So we have new generation which are cheaper.

SteveO (28:35.044)
of the rest of the N4 versus C4A stuff.
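Using the per-vCPU and per-GB hourly rates read out above (treat them as approximate and region-specific), the gap for a hypothetical 4 vCPU / 16 GB shape works out roughly like this:

```python
# Rough comparison of a hypothetical 4 vCPU / 16 GB machine using the
# per-vCPU and per-GB hourly rates quoted in the episode (approximate).
SHAPE_VCPU, SHAPE_GB = 4, 16

rates = {            # (USD per vCPU-hour, USD per GB-hour)
    "N2":  (0.0537, 0.0091),
    "C4A": (0.0540, 0.0090),
    "N4":  (0.0413, 0.0070),
}

hourly = {
    family: vcpu_rate * SHAPE_VCPU + gb_rate * SHAPE_GB
    for family, (vcpu_rate, gb_rate) in rates.items()
}

baseline = hourly["C4A"]
for family, price in hourly.items():
    print(f"{family}: ${price:.4f}/h ({price / baseline - 1:+.1%} vs C4A)")
# On these rates the N4 comes out roughly 20-25% cheaper than both C4A and N2.
```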

Frank (28:38.392)
Okay.

SteveO (28:40.56)
And oh, he had a response: yes, for certain workloads it reduces the dependency on provisioning a Z-series device. Also seen with RDS. There we go, that's some news on that. Noise, I think.

Frank (28:50.53)
Perfect.

Frank (28:57.004)
Here we go, so we're in savings. And this is something... there is the word "saving", and I didn't even know this existed. So: introducing AWS RTB Fabric for real-time advertising technology workloads. Okay, so it's very specific, but it's a big market. So, today we are announcing AWS RTB Fabric, a fully managed service purpose-built for real-time bidding, that's the RTB, advertising workloads.

SteveO (29:09.978)
Yeah.

SteveO (29:16.558)
A huge market.

Frank (29:27.074)
The service helps advertising technology companies seamlessly connect to their supply and demand partners, such as Amazon Ads, GumGum, Cargo and lots of others that I have no clue about.

SteveO (29:39.906)
Yeah, I've never heard of any of them.

Frank (29:42.144)
And well, Amazon Ads I've heard of, because I was looking at the... yeah, absolutely not. But yeah, so I understand that this needs to be extremely fast, because with advertisements you need to pass information about what you're presenting, are they clicking or not clicking. There's a ton of information passing between the person asking for the advertisement, you as an intermediary, and the person clicking on the advertisement, or potentially even just seeing it. So

SteveO (29:44.344)
Yeah, I mean the rest of them. Yeah.

SteveO (29:53.711)
Yep.

SteveO (30:01.636)
Yeah.

Frank (30:11.362)
This is new, this is there. And yeah, there is also an architecture diagram, with request and response gateways, and I will let whoever understood RTB as an acronym investigate, because...

SteveO (30:27.34)
Yeah, 80% lower networking costs compared to standard networking. That's massive.

Frank (30:30.202)
Yes. That's versus standard networking costs; that's egress for you. So I think that they've had enough big customers in that area asking them to negotiate contracts or whatever to reduce networking, and so they've just standardized it, which makes sense.

SteveO (30:35.214)
Yeah. Yeah, yeah, yeah.

SteveO (30:52.304)
Oh, and there's a good guide on getting started. I'm just looking through it. Yeah, you know, I often say the Google guides are quite good at those things; this is really neat. It's showing you literally the commands. Yeah, it's cool.

Frank (30:56.95)
Here we go.

Frank (31:04.737)
Yeah, I think this is because we have Betty Zenger writing this, and she's a senior developer advocate. Okay, so it's developer-centric, so she knows code really well and she shares that with us. Brilliant. Plus it's bash, which I love anyway.

SteveO (31:09.784)
Yeah.

Yeah, yeah, it's really good. Yeah. Yes, yes, I can read it for a change.

Frank (31:23.079)
Yes! It's not... Gosh, yes, agreed. So... I think we need music maestro.

SteveO (31:31.378)
yes.

Frank (31:33.59)
What's next? There is some... yeah, AWS, I like how they name it, which is why I missed it initially: AWS service availability updates, which roughly tells you what they're going to kick out. And so there are quite a lot of services which are moving to maintenance, which means they will no longer be accessible to new customers from the 7th of November. So that's over. And so you have S3 Object Lambda,

SteveO (31:59.791)
REEEE

Frank (32:03.02)
you have Amazon Glacier, because Glacier is now supported through S3, so the old standalone Glacier is not available anymore. S3 Object Lambda is not available. Yes. The Amazon WorkSpaces Web Access client is not... What I'm saying is, if you built your technology on that... Migration Hub goes, Mainframe Modernization, IoT SiteWise Edge data...

SteveO (32:14.193)
because it's just part of Lambda now.

SteveO (32:23.577)
Migration Hub?

snowball edge.

Frank (32:30.872)
but the big one also, a snowball!

SteveO (32:33.529)
Yeah, well, snowball edge,

Frank (32:35.522)
Yeah, but it's Snowball, it's the thing they send you. So that's over. Now you need to... they recommend... I was looking at that one because I remember when Snowball came out, we were all... Yeah. So it is: you need to move your data to an AWS environment, an AWS place, to transfer it. It's not AWS sending stuff to you.

SteveO (32:38.312)
That's all it's not all. Yeah, yeah.

SteveO (32:46.545)
You should explore DataSync for online transfers, or the AWS Data Transfer Terminal for physically secure transfers. Okay, so it's bringing it to a place instead; I'll explore it.

SteveO (33:04.784)
Yeah.

Frank (33:04.846)
So some of the work is being passed to the customer. Also, there is this HealthOmics, which I remember being announced two years ago. I don't know if it's just a subpart, it seems to be a subpart. But yeah, CodeCatalyst is out, CodeGuru... So those are all the things which do not accept new customers; they're still alive. And the following, well, yes...

SteveO (33:20.915)
wow.

SteveO (33:28.335)
Yes, because they don't like to kill things, do they? They're very careful about that.

Frank (33:32.45)
So the following services are entering sunset, and sunset is where they'll end operational support, usually within the next 12 months. That's FinSpace, which I didn't even know existed, Amazon Lookout for Equipment, IoT Greengrass and AWS Proton. We've talked about Proton a couple of times, but so that's it, it's sunsetting. And the last one is AWS Mainframe Modernization Application Testing, which

SteveO (33:52.017)
There you have it.

Frank (34:02.486)
is already out of support. Quite massive changes. And yeah, we've moved from AWS never changing anything to now they're happy to... they have too much. But yeah, even Migration Hub disappears, which was quite interesting.

SteveO (34:05.411)
No longer available, yeah.

SteveO (34:18.213)
Yeah.

SteveO (34:21.713)
Hmm, that is interesting.

Frank (34:24.366)
So I need to pay attention now to "service availability" as a header. Next one is: Amazon Q Developer now helps customers understand service pricing and estimate workload costs. So I've not tested it yet. As usual, there were people on LinkedIn, who I am not sure have tested it either, but they were just saying how fantastic it was. But the idea is that you

SteveO (34:28.086)
Yeah, to that list.

SteveO (34:52.539)
Conceptually it is, isn't it? Conceptually it's great.

Frank (34:54.154)
It is brilliant. It's the calculator; you can expect the calculator to be the next thing sunsetting. Because: how much does RDS extended support cost? Or: I need to send 1 million notifications per month to email and 1 million to an HTTPS endpoint, estimate the monthly cost using SNS. You can ask Amazon Q to tell you things which are related to the calculator.

SteveO (35:02.32)
you

SteveO (35:06.329)
Yeah.

Frank (35:23.774)
My first thing, so I'm cynical on this kind of thing, I always see what could go wrong. Because if it works, absolutely brilliant, beautiful. If it doesn't work, if I ask the same question three times and I get three prices, I'm going to be particularly unhappy. Or again, it's very interesting, but I don't know if I'm going to need one million notifications; what is a good enough number? Or

SteveO (35:37.978)
Hmm.

Frank (35:50.07)
if I'm going to set up an environment, how does it scale? Those would be the questions I would ask. What are the ratios, or standard ratios?

SteveO (35:58.865)
The free tier is 50 agentic chat interactions per month, which I would probably use in the first 30, 40 seconds. You can also transform up to a thousand lines of code per month. So I wonder if, like, you've built something and then you want to put it onto Lambda or something, then you'd maybe throw it at Q. And you can install it in your IDE, so you can put it in VS Code, Visual Studio...

Frank (36:00.483)
Yep.

Frank (36:09.87)
And for now we're done.

SteveO (36:26.768)
There's a bunch of lists there. Let me have a look at the wider pricing. What do you do after that? Conceptually it sounds like a nice thing.

Frank (36:26.958)
you

Frank (36:31.586)
Yeah.

It's brilliant, and there is "Managing your cost using generative AI with Amazon Q Developer", an article that you can search for and that will go into more detail. I need to try it out. Some people have finally spoken well, very well, of Q, so...

SteveO (36:46.958)
Yeah, again, the pricing.

SteveO (36:52.811)
so there's free and pro.

Frank (36:55.288)
Here we go.

SteveO (36:56.592)
So additional... hang on, this is interesting. So, additional usage included "till", not "until" but T-I-L-L, November 2025. For Pro, you can do 4,000 lines of code per month per user, pooled at the account level. Extra lines of code cost $0.003 per line of code submitted. So depending on how you're using this, it could add up quite a bit. It depends how it iterates.

Where's the pro price? $19 per user per month. Oh, that's all right.
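Taking the figures read off the pricing page at face value ($19 per user per month, 4,000 transformed lines per user pooled at the account level, $0.003 per extra line), a quick sanity check on what heavier usage costs; the team size and monthly line count below are purely illustrative.

```python
# Quick sanity check on Amazon Q Developer Pro pricing as read out above.
# Team size and usage are illustrative assumptions.
PRO_SEAT = 19.00          # USD per user per month
INCLUDED_LINES = 4_000    # transformed lines/month per user, pooled at account level
EXTRA_LINE = 0.003        # USD per additional line of code submitted

users = 10
lines_transformed = 60_000  # hypothetical monthly usage across the account

pooled_allowance = INCLUDED_LINES * users
overage_lines = max(0, lines_transformed - pooled_allowance)

total = PRO_SEAT * users + overage_lines * EXTRA_LINE
print(f"seats: ${PRO_SEAT * users:.2f}, overage: {overage_lines} lines "
      f"= ${overage_lines * EXTRA_LINE:.2f}, total: ${total:.2f}/month")
```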

Frank (37:32.694)
Yeah, everyone is on 20. Are they making it real? Well, I would say, for AWS, for this kind of stuff, 19 is... if it's just for cost, that's an LLM, a RAG going to just the pricing pages and doing some maths, I think.

SteveO (37:34.842)
Yeah, yeah, that's, mean, that's, that's all right.

SteveO (37:47.948)
Yeah. Yeah.

Well, no, because it does more than that, right? That's just one of the things that the Developer... oh yeah, this is Amazon Q Developer, it does more than that. Yeah, so this is the pricing for using the tool as a whole. It now does the pricing bit like you say, but this is just to have Amazon Q Developer. Yeah, yeah, yeah, it's not just the pricing bit.

Frank (37:57.398)
Yeah, but that was what I was asking.

Frank (38:04.27)
that's brilliant

So that's cool. Yeah, so it's going the way of Claude or ChatGPT or Cursor, et cetera.

SteveO (38:13.648)
Yeah, yeah, 100%. 100%. I will... well, I'm doing something that I'm going to move to Lambda, so I will test it for that. I'll try the free one; I should be able to do it, I shouldn't need 1,000 lines. So I will let you know.

Frank (38:21.75)
Yes

Frank (38:26.95)
True. I think I've not put it there, but someone... there was a release, and I don't have the link, of a new forecasting model by AWS, and you could potentially also... it's an LLM-style model and you can run it in your own environment if you really want to. So that's something that is interesting. I don't have it on the list in here because I completely forgot.

SteveO (38:46.192)
Mmm.

SteveO (38:51.241)
Which one was it? Was it the one you did some research on? I mean, we're talking...

Frank (38:54.456)
B2B, give me a second, I'll find it for you, because I know I have at least two people telling me, Frank, have you seen that stuff? Which was quite cool. Yes, that was different. Yes.

SteveO (39:04.176)
For that forecasting research you showed me, which tool was it that handled seasonality for forecasting better than the rest of them?

Frank (39:11.992)
So in this case I was using Prophet; or rather, eight algorithms which are available in Python, and one is Prophet, and it is the best one. It is from Meta originally. And Amazon is introducing Chronos-2: from univariate to universal forecasting. And the idea is that it can take multiple time series and work on them all together. So it is called...

SteveO (39:15.279)
Hmm.

Yeah.

SteveO (39:21.54)
Yeah.

Frank (39:42.462)
So it is available as the Chronos-2 foundation model, designed to handle arbitrary forecasting tasks. And it has zero-shot... You can download it on Hugging Face, I think. And other things. Yes, I know, but we're gonna be good.
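Frank mentions Prophet as the classical baseline here; a minimal sketch of that workflow on a daily cost series follows (the CSV path and column names are placeholders), which is the kind of seasonality-aware forecast Chronos-2 is now aiming to do zero-shot across many series at once.

```python
# Minimal Prophet sketch on a daily cloud-cost series, the classical baseline
# mentioned in the episode. CSV path and column names are placeholders.
import pandas as pd
from prophet import Prophet

df = pd.read_csv("daily_costs.csv")                   # placeholder export
df = df.rename(columns={"date": "ds", "cost": "y"})   # Prophet expects ds/y

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(df)

future = model.make_future_dataframe(periods=90)      # forecast the next quarter
forecast = model.predict(future)

print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```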

SteveO (39:58.317)
Nice. I've just realised we're at nearly 40 minutes already. We've been chatting about these things far too much. Yeah.

Frank (40:05.51)
Next one. So let's go. Next one is: your ultimate guide to cloud financial management sessions at re:Invent 2025, know before you go. And no, I am not going this year, for the first time in eight or nine years. And yeah, I think I'm gonna enjoy watching it from home this time, not walking,

SteveO (40:13.99)
That's... yeah, we're not going this year, are we? People keep asking us, but we're not going.

Frank (40:30.574)
getting lost going to Mandalay Bay, because I'm always getting lost there, and the MGM. So yeah, there are some interesting things. For example, on Wednesday the 3rd of December, you have Corey Quinn and Matt Cousert, who talk about what's new in AWS cost management. The interesting thing is that neither is from AWS.

SteveO (40:31.28)
yeah

SteveO (40:36.112)
Mm.

SteveO (40:45.444)
with Matt.

SteveO (40:53.602)
No, FinOps Foundation and Duckbill.

Frank (40:55.518)
Exactly. But then you have other really interesting ones, so: learn with experts, peer talks and hands-on experiences. And what is interesting is we effectively start seeing FinOps as a name being used all over the place, so that is quite cool. The other... so where am I? So that's next, the FinOps guide. So there is also, and that's from AWS,

SteveO (41:11.596)
Rather than saving. Yeah.

Frank (41:23.968)
a FinOps guide to comparing containers and serverless functions for compute.

SteveO (41:30.17)
comparing them as in to make the decision of which one to use.

Frank (41:31.67)
Yes, yes: either containers or serverless. So, what are containers, what is serverless... and it's correct, because once you get to volume it gets completely crazy. So it's quite an interesting one, I invite you to have a read. But also, again, it says FinOps, which is new for AWS; they were really reluctant before. So since moving and becoming

SteveO (41:50.82)
Yeah, but you know.

Frank (42:00.088)
part of the Phenops Foundation, they are starting to embracing it and yeah it's all good. I'm happy about that.

SteveO (42:07.332)
Yeah, interesting article. I'll add this to my to-read list.

Frank (42:13.608)
Okay, and the next one is yours.

SteveO (42:17.606)
Goodness, I'm trying to add something to my read list. Right, it is that anomaly detection is now generally available on GCP. So you can view and manage cost spikes that deviate from your typical spend patterns using the anomaly dashboards, which are generally available. Each anomaly includes a detailed root cause analysis that identifies the top services, regions and SKUs that contributed to the spike.

With this launch, they've added some new features: auto-generated anomaly thresholds that update on a daily basis based on your usage patterns, and deviation percentage as a new threshold for you to configure for your own anomalies. I'm really staggered that wasn't there before. And email alerts automatically set up for billing administrators to help you. I do remember this was an area that was just generally missing in Google. And I remember, I think,

Frank (43:13.996)
Hmm.

SteveO (43:15.696)
a little while ago. However, the fact that there are auto-generated anomaly thresholds based on usage patterns, presumably using machine learning (people are going to call it AI, but it's machine learning), to do that I think is massively interesting. And there's a good article underneath it as well.

Frank (43:22.147)
Yep.

Frank (43:32.694)
Yeah, there was the talk last year or this year from Victoria at FinOps X about how they were rebuilding the models on a regular basis to understand what the upper threshold and the lower threshold are. And when something was within those thresholds, fine; when it went outside, it was an anomaly and was detected. And it seems that Google is approaching this, but in an automated way: you don't need to think about when to rebuild the model, they just do it.
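As a toy illustration of the idea Frank describes (not Google's actual algorithm, and with a placeholder CSV path): recompute an upper and lower band from recent history every day, flag anything outside it, and track the deviation percentage.

```python
# Toy illustration of threshold-based anomaly detection: rebuild the bands
# from recent history each day and flag spend outside them. This sketches the
# general idea only, not Google's model. CSV path and columns are placeholders.
import pandas as pd

costs = (
    pd.read_csv("daily_spend.csv", parse_dates=["date"])
    .set_index("date")["cost"]
)

window = 28                                # days of history used for the bands
rolling = costs.shift(1).rolling(window)   # exclude today from its own baseline
mean, std = rolling.mean(), rolling.std()

upper = mean + 3 * std                     # auto-generated thresholds, refreshed daily
lower = (mean - 3 * std).clip(lower=0)

anomalies = costs[(costs > upper) | (costs < lower)]
deviation_pct = (costs / mean - 1) * 100   # akin to a deviation-percentage threshold

print(anomalies)
print(deviation_pct.tail())
```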

SteveO (43:37.914)
No.

Yeah.

SteveO (43:56.754)
Yeah, I seem to remember Sarah talking about it. Maybe it was in public preview and this is the GA, but I don't remember. Yeah, I think that's it. Right, noise.

Frank (44:09.026)
and it's yours.

SteveO (44:09.127)
It's me again, crumbs; I just haven't got used to it being me. So this is an Azure update, in public preview: environmental sustainability features in Azure API Management. So Azure API Management now supports environmental sustainability features in public preview. These capabilities help an organization minimize the carbon footprint of their API infrastructure by making API traffic and policy behaviors carbon-aware.

And with these new features, you can shift and load balance API traffic to backend regions with low carbon intensity, which is super cool; shape API traffic dynamically based on real-time carbon emissions in your API Management region; and build carbon-intelligent policies using the new context.deployments.sustainabilityinfo.currentcarbonintensity property to adapt caching, telemetry or rate limiting based on emission levels.

These sustainability-aware capabilities empower teams to align API operations with corporate sustainability goals and promote more efficient, environmentally responsible API and AI workloads. And I just think this is staggering, because even at this conference, people and organizations seem to be really driving in the opposite direction to sustainability. And then...

Frank (45:33.814)
Mm-hmm.

SteveO (45:35.535)
Microsoft have released this really cool, really nice feature that is sustainability aware. I think it's really great.

Frank (45:39.886)
Yes, and there is also the next news item, which is that the AWS Customer Carbon Footprint Tool now includes Scope 3 emissions data. Now, you'll have to talk with Mark to understand how accurate or not that is.

SteveO (45:56.086)
Yeah, well, I was sat next to him yesterday when we were talking about it, and we still think it misses Scope 2.

Frank (45:59.992)
So.

Yeah, but that's the point: they are moving towards it. And I was reading it through a lawyer's lens, and it says "updated to include Scope 3 emissions and Scope 1 natural gas and refrigerants" to allow customers to have more complete visibility. This update expands coverage to all three scopes of the industry standard; it doesn't say it's totally accurate or whatever.

SteveO (46:28.817)
I mean, it's progress, right? It's progress.

Frank (46:30.862)
But it's progress, that's the point. And the data, which is quite funny: historical data is available back to January 2022. So you can go back in time and see.

SteveO (46:41.721)
Wow.

That's when they launched the calculator, wasn't it?

Frank (46:47.584)
Yes, and there were some initiatives, I think, at that time. There was a big boom in sustainability, and then that's when ChatGPT came out, roughly. And that's where we are now. So, but yeah, those are the new things. That's positive; every step in that direction is very welcome. So, liking it.

SteveO (47:01.146)
Yeah.

Frank (47:14.412)
There we go. So, last thing, and it's from AWS: updated cloud financial management digital training courses, with a new course added, FinOps for GenAI. So, first of all, in Skill Builder, they're all one-hour trainings, but now you have FinOps Fundamentals and Strategies Part One, FinOps Fundamentals and Strategies Part Two, Cost Optimization Solutions for FinOps Parts One and Two,

and now, newly added, Cloud Financial Management: FinOps for GenAI. So there are more and more... again, the word FinOps: there are more and more tools, courses and other things with the FinOps word in them, but also with the content. So it's getting there, we're getting there. And yeah, I invite you to... you are the expert, you were talking about this earlier.

Have a look and tell me what you think about this FinOps for GenAI course, when you can.

SteveO (48:18.321)
I will give it a look, I'll have a watch. Yeah, we are done. It's interesting, so yeah, I did a podcast with someone from AWS on FinOps for AI a month or so ago, maybe around this time. But it wasn't Brandon, sorry, Bowen; sorry, I used to work with a guy called Brandon Woll, so I got my names mixed up. But yeah, I will give it a look

Frank (48:20.93)
And we're done.

SteveO (48:48.401)
and let you know what I think. Well, if I think it's good, I'll let you know what I think; I don't want to come and tell people if I think something's bad, because that might be mean, and I'm sure some people will still find it valuable. But yes, we are done. Sorry that we've gone on to almost 50 minutes today, everyone, but we discussed more of them, really. And there was a lot of news, wasn't there? You had loads. Yeah.

Frank (49:10.242)
Yes, yes, there was lots of news. So thank you, listeners, for staying with us until now.

SteveO (49:17.005)
Yeah, yeah, maybe we should release it as a two-part, who knows. Right, but thank you everybody and we truly look forward to our next episode which we're recording very soon which will be an interview episode and so that'll be released in the not too distant future as well.

Frank (49:33.23)
Absolutely, it's cost accounting we're going to talk about, which I didn't even know existed.

SteveO (49:38.258)
Indeed, it is an interesting one. So thank you, everyone. Thank you, Frank and speak to you all soon. Bye bye.

Frank (49:42.37)
Thank you, Steve. Bye, everyone.