
What's new in Cloud FinOps?
Stephen Old and Frank Contrepois get together to discuss what's new in the world of cloud when it comes to FinOps. There are two monthly episodes, one where we'll discuss the top stories we've found from this month and a second episode where we bring in a friend of the show to talk to us about a topic of their choosing.
WNiCF - June 2025 - News
Join us, Stephen Old and Frank Contrepois, as we valiantly record the June 2025 news episode of “What’s New in Cloud FinOps”; this time, indoors and largely free of wildlife interruptions. In this news-packed update, we tackle the latest cloud cost and optimisation news from AWS, Azure, and Google Cloud.
Expect witty repartee as we dissect everything from Azure’s flexible memory to Google’s enhanced forecasting models, and even the curious case of the disappearing Microsoft Focus 1.2 schema. Along the way, you’ll learn about Athena’s improved query result management, Dataproc’s zero-scale clusters, and the existential dread of forgotten EBS snapshots.
Whether you’re a FinOps expert, a cloud enthusiast, or simply enjoy hearing two experts try to outdo each other (and occasionally their own tech), this episode is your monthly dose of cloud cost clarity; with a side of British sarcasm. Tune in for news, numbers, and the occasional existential crisis.
Stephen Old (00:00.947)
Hello everyone and welcome to this June 2025 news episode of What's New in Cloud FinOps with myself, Stephen Old, and my good friend... This is the first time ever, Frank, that this is our second attempt to record something.
Frank (00:10.338)
Contrepois!
Frank (00:15.67)
Yes, yes, because my bad on the first, my bad. Yes.
Stephen Old (00:18.951)
Well, it was a lovely idea. Frank decided to record it outside; his desk was a tree. And we got quite far, we got onto like the second or third area. Then you disappeared. And I carried on recording in the hope that, like the fire alarm episode, you would return. But unfortunately, he didn't. So there was a recording that shall never see the light of day. The benefit of that is, for the first time ever... and Frank, you posted about this on LinkedIn, so I was more willing to share it today than when we recorded it last week.
Frank (00:34.018)
Yes!
Frank (00:38.946)
You'll never listen to...
Stephen Old (00:49.029)
You're trying out a new browser, and you used that to run the Google research.
Frank (00:51.084)
Yes, yes.
Yes, I tried to get the browser to go to the Google page and say, hey, find me all the FinOps news, and create me something based on that other tab with the document that we use. So the idea is that with that context, the AI system automatically had an extended context, so by default it's almost an agent, and the tool is the web browser, existing pages, and eventually searching for new things. And it
came up with something, yeah?
Stephen Old (01:23.719)
It's done some good ones. It's done some good ones and a couple that weren't, but now we've managed to remove the couple that weren't so people don't have to see me trying to feel around in the dark for what this has to do with FinOps. I also have then, well, I did some of the research beforehand, so I've added a few in that it had missed. But it's done, I mean, it's done a reasonable job for sure. And it will get better. It's hard, isn't it?
Frank (01:42.178)
Well, plus, I think FinOps is hard, because you don't know: if we were to say, hey, we want everything that has the word cost or billing... or it might be that the prompt was not great. I think it would be worth going once through everything, have that one on the side, and then say, hey, use these keywords that I use for Slack, AWS, and tell me, do you see them? Do you see the same as my work?
Stephen Old (01:51.143)
Yeah.
Stephen Old (02:05.883)
Yeah, we'll keep it going. I just need to install that on my personal machine and give it a go there. But anyway, let's get on with the news. Unfortunately, that does mean I'm bouncing around all over the place, just because of how awesome it is that we've got like three different ways of putting things in. But we shall do our very best. And here you go. Here is the sound.
Frank (02:28.362)
Excellent. So this is instances and compute. And the first one is AWS: now generally available, Amazon EC2 C8gn instances. So the C8gn instance is powered by the latest AWS Graviton4 processors: 30% better compute performance (not price performance, compute performance) than Graviton3,
which powered the C7gn.
Stephen Old (02:59.677)
Hang on, so it's not even price performance. It hasn't even mentioned it at all. Wow.
Frank (03:03.342)
No, no, it's just performance now. And it can offer up to 600 gigabits per second of network bandwidth, to scale and perform while optimising the cost of running network-intensive workloads. So in theory, you should use that only if you have network-intensive workloads, such as network virtual appliances, data analytics, and CPU-based artificial intelligence and machine learning (AI/ML) inference.
And I find the CPU-based artificial intelligence extremely interesting, because I know that Intel is building a new kind of hybrid GPU, so lots of things that GPUs are doing can be done partially, or some of them specialised, by the CPU. So very interesting. But the C8gn instance goes up to 48xlarge, with up to 384 gigabytes of memory and up to 60 gigabits per second of bandwidth
Stephen Old (03:47.251)
Hmm.
Frank (04:01.472)
to EBS. So yeah, lots of numbers. The point is, I'd like to see the prices of these things. Where is it, where is it... can we get the C7g? Let's see how much a C7g is.
Stephen Old (04:02.909)
So.
So is that all?
That's what I miss.
Frank (04:28.202)
No, I don't have the price here either. Leave it there, we'll find it, yeah. And in this case C7, also C7g pricing, because I thought I was at the pricing but I'm not. All right, we've got C8.
Stephen Old (04:28.691)
Let me find out.
Stephen Old (04:34.011)
C7G pricing, I can get you quite quickly.
Stephen Old (04:48.805)
Yeah, you've got to go through more screens than you've had to go through before, haven't you?
Frank (04:52.11)
Absolutely. Capacity reservation... no, I want on-demand pricing. Gosh, you need to click. Yes.
Stephen Old (04:55.667)
EC2 on demand pricing. Here we go. So C7G. C7G for what size?
Frank (05:05.14)
I'll pick one. Pick large.
Stephen Old (05:06.355)
Large, because everything should have a large. 0.0723. And what do you want me to compare it to?
Frank (05:11.854)
I'm gonna go to C8, I think it is, for me. So C8g: 0.07948. So 0.08 versus 0.072.
Stephen Old (05:27.123)
0.72
Stephen Old (05:32.158)
0.072, yeah. Yeah, so it's 0.006 larger. So it's what? That's 8%, something like that.
Frank (05:40.384)
Yeah, yeah, and they don't talk about it anyway. Interesting. So yeah, we're continuing the trend: prices do not increase between individual instances, but on generational changes, yes, they do.
Stephen Old (05:54.085)
Yeah, and it's remaining around that six to seven percent mark. That's just a very rough guess, isn't it? But yeah, that sounds about right.
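As a quick sanity check on the numbers read out above, the generational uplift works out slightly higher than the rough guess on air (these are the large-size on-demand rates quoted in the episode; check the EC2 pricing page for current figures):

```python
# Generational price comparison using the on-demand rates quoted in the episode.
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old price to new price."""
    return (new - old) / old * 100

c7g_large = 0.0723   # C7g.large on-demand $/hr, as read out above
c8g_large = 0.07948  # C8g.large on-demand $/hr, as read out above

uplift = pct_increase(c7g_large, c8g_large)
print(f"C7g -> C8g large: +{uplift:.1f}%")  # just under 10%
```

So closer to 10% than 8%, though still in the single-digit generational range the hosts describe.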
Frank (05:59.169)
OK.
Frank (06:03.342)
Next, still me: Amazon SageMaker AI Training Jobs announced general availability of P6-B200, which you can guess is powered by an Nvidia B200, which is a GPU. Yeah, look at that. Twice the performance compared to P5en instances for AI training.
Stephen Old (06:16.019)
gotta be expensive.
Frank (06:25.982)
And they feature 8 Blackwell GPUs with 1,440 GB of high-bandwidth GPU memory, a 60% increase in GPU memory bandwidth compared to the P5en, 5th-generation Intel processors, and up to 3.2 Tbps of Elastic Fabric Adapter networking. Wow! They're on the Nitro system, as usual,
and they probably cost tons of money. Do you have it?
Stephen Old (06:58.151)
Yeah, I'm trying well searching again. There's so many parts of pricing around around SageMaker.
Frank (07:03.894)
Yeah, and they don't... yeah, that's the thing. It says Amazon SageMaker AI Training Jobs, so it's an instance type under Amazon SageMaker. For me, it's reasonable.
Stephen Old (07:15.109)
On-demand pricing. Here we go. So this is the B200.
Frank (07:19.534)
P6-B200
Frank (07:26.126)
Peace out.
Stephen Old (07:26.451)
Sorry, I'm scrolling down. That one is not on this page.
Frank (07:32.31)
I think it's only on demand reservation.
Stephen Old (07:35.187)
Right, I'm looking at the on-demand pricing. Oh, hang on.
Frank (07:41.154)
They might not have, it's only in Oregon by the way.
Stephen Old (07:43.973)
Only in Oregon? Okay, well I was looking at Ohio so that might be part of the problem. Oregon.
Frank (07:53.108)
Okay, does some shit.
Stephen Old (07:53.587)
Let me just search B200 to see if it comes up. Here we go. B200, 48xlarge, is $74.88 per hour. The P5, 48xlarge, was $36 per hour. The P5e was $39. The P5en was $41. So yeah, it's over double the P5 48xlarge.
Frank (08:19.81)
Okay, yeah, and they compare it with the P5EN.
Stephen Old (08:23.155)
Okay. Yep. And for that one, is like 80 % more at least is 41 versus 74.
Frank (08:29.3)
Yes, but you have 8 Blackwell GPUs. So if you know that the 8 Blackwell GPUs are going to give you more than twice the performance of the P5en, or they're going to go much faster and it's exactly what you need, or with many models you are just trying to cram as much compute power into the smallest possible environment and cost is irrelevant, then well done, that's for you. Otherwise, if you're cost-conscious, yeah, have a look.
Stephen Old (08:45.959)
Hmm.
Stephen Old (08:52.499)
Yeah
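Putting the rates quoted above side by side, the break-even logic Frank describes is simple arithmetic (figures are the ones read out on air for Oregon; check the SageMaker pricing page for current numbers):

```python
# Per-GPU-hour and price-premium comparison for the rates quoted in the episode.
def per_gpu_hour(instance_rate: float, gpus: int = 8) -> float:
    """Hourly cost per GPU for an 8-GPU instance."""
    return instance_rate / gpus

p5en = 41.0      # approx $/hr quoted for the P5en 48xlarge
p6_b200 = 74.88  # $/hr quoted for the P6-B200 size on the pricing page

premium = p6_b200 / p5en  # ~1.83x the price
# If a training job really runs ~2x faster on the B200s, cost per job is
# slightly lower; if the real speedup is below ~1.83x, the P5en wins per job.
print(f"premium: {premium:.2f}x; per GPU: ${per_gpu_hour(p6_b200):.2f}/hr vs ${per_gpu_hour(p5en):.2f}/hr")
```

In other words: the claimed 2x performance only just clears the ~1.83x price premium, so the older generation remains competitive for anything that doesn't hit that speedup.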
I mean, I built apps recently that cost more, sorry, less a month than that cost per hour.
Frank (09:03.676)
wow, what do you mean?
Stephen Old (09:04.338)
Even some, I've built a small language model that costs a quarter of that a month.
Frank (09:12.827)
Okay, yeah, yeah, yeah
Stephen Old (09:14.803)
But I mean, just, well, it costs a quarter of what that costs per hour for a full month at Spong & Powhatan, yeah. So that's a significant right. So the first one for me is, and big thank you to Frank, by the way, for doing so much of the research this month. I have been a bit all over the place with holidays and work travel.
Frank (09:20.343)
Okay.
Stephen Old (09:38.099)
So he's done a lot of the research. The first one is unlocking more power with flexible memory in Azure SQL Managed Instance. So basically, you can customise the memory-to-vCore ratio in your SQL managed instances, which is obviously Azure. So in the standard series, memory per core was 5.1 as standard. Now with the premium series, you can do 7 to 12.
And for the premium series memory optimised, you can do up to 13.6; I think it starts down at 4.38 for the biggest ones. So yeah, there's lots of variance. So for the 4-vCore, you can have 7, 8, 10, 12 ratios, which is 28, 32, 40, 48. And then the largest one, which you can only get on the premium series, is 128 vCores for 560 total memory.
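A quick sketch of how a ratio choice translates into extra billable memory under this model (the per-GB-hour rate below is a made-up placeholder for illustration, not a real Azure price):

```python
# Flexible-memory billing sketch: memory above the default ratio is billed
# per GB per hour. The default 5.1 GB/vCore ratio is from the episode; the
# hypothetical_rate is a placeholder, not an actual Azure price.
def billable_memory_gb(vcores: int, chosen_ratio: float, default_ratio: float = 5.1) -> float:
    """Extra GB billed on top of the default memory allowance."""
    return max(0.0, vcores * (chosen_ratio - default_ratio))

# e.g. 4 vCores at a 12 GB/vCore ratio = 48 GB total vs 20.4 GB default
extra = billable_memory_gb(4, 12)
hypothetical_rate = 0.01  # $/GB/hour, placeholder only
print(f"billable memory: {extra:.1f} GB, ~${extra * hypothetical_rate * 730:.2f}/month at the placeholder rate")
```

The point being: you pay only for the memory delta, instead of jumping a whole instance size to get more RAM.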
And then it has various pricing pieces against it, how it's broken down, which is billed per GB per hour on top of the default memory; the billable memory is the difference. So yes, that is interesting: more flexibility, rather than having to go up to a new instance size to get around it, potentially. So that is nice. The next one is that you can use OpenTelemetry with Azure Functions. And this is an article, I won't go too much on,
about how to configure a function app to export log and trace data in an OpenTelemetry format, which can then be ingested. OpenTelemetry is really quite a good way forward when you've got multiple different forms of data being pulled out and you want to take it to one place to gain some additional benefits, and you don't want to have multiple agents. So this is not just an Azure Functions thing really, but this allows you to suck in data in an open format, similar to the rest.
You can have a look at this in C#, Java, JavaScript, PowerShell, Python and TypeScript. It boggles my mind that Java is still an option, but obviously it is. What is next? The next is Google Cloud. So this one is from, what date was it? June the 13th, 2025, generally available: the general-purpose C4D machines powered by the fifth-generation AMD EPYC processors. That's the Turin ones.
Stephen Old (12:00.753)
Obviously backed by Titanium, they're generally available for running mission-critical workloads, including web apps, game servers, AI inference, web serving, video streaming, and data-centric applications. It is available in standard, highmem and highcpu, which is pretty standard. The next one is also generally available; this was earlier in the month, I want to say. So let me see if I can dig down and find it.
Actually, this was grabbed from the previous month, but I can't remember talking about it. Just in case we have, I'll just whip through it. The A3 accelerator-optimised machine types are now available in additional regions, which includes Belgium and Netherlands, Mumbai and Delhi, and then Iowa, South Carolina, Virginia, Oregon, Texas. Right. And that I believe is all of... hang on. No, this one I found. Let's see if mine loads.
Frank (12:29.55)
All right.
Frank (12:50.072)
think there is one.
Stephen Old (12:57.587)
It hasn't. It's gone to the top of the day, which never helps.
Frank (13:00.664)
All
Frank (13:04.824)
So it seems to be: modify licenses on a managed disk.
Stephen Old (13:08.333)
Oh yeah, generally available: you can now modify licenses attached to your disks. Previously, licenses on disk resources were immutable, so you had to delete and recreate the disks, or engage the support team, to make any changes. This feature provides greater flexibility for managing disk licenses. You can now append, remove, replace, and view historic license updates. You can perform in-place license upgrades, such as Ubuntu to Ubuntu Pro,
using the gcloud CLI and REST, and switch from pay-as-you-go to bring-your-own-license models, etc. Review license changes and restrictions, and append RHEL or ELS licenses to a newer version. So you can do that on the fly now. So that was that update. Thank you for reminding me which one it was. Here's the noise.
Frank (13:59.084)
We talk about data, DBs, AI... because yes, now we add that one. So Amazon DynamoDB global tables with multi-Region strong consistency is now generally available. And that's a big deal. So now you can have DynamoDB, so global, with multi-region strong consistency. It's a...
Stephen Old (14:14.771)
Hmm.
Frank (14:23.114)
It allows you to have exactly the same data; you're guaranteed to have the same data in multiple regions. So it's not eventually consistent, which is the technical term I didn't remember. But overall, with this new capability, you can now build highly available multi-region applications with a recovery point objective of zero, achieving the highest level of resilience. Usually this would cost tons; I don't know what the price is, but...
it's always available, you always read the latest data from any region. It also removes the undifferentiated heavy lifting of managing strongly consistent replication. So it's ideal for building global applications with strict consistency requirements, like user profile management, inventory tracking, financial transactions.
Stephen Old (15:11.357)
Mm-hmm.
Frank (15:14.104)
They are available in lots of places and you can just start by looking at the DynamoDB Developer Guide or visit DynamoDB Global Tables page.
Stephen Old (15:25.651)
Yeah, it's interesting. I mean, DynamoDB is very useful and quite powerful, but it does surprise me... well, maybe this is them trying to push more into the global tool, because for me, it's never quite hit those heights.
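For the curious, here's a minimal sketch of the request you'd hand to boto3's `get_item` to ask for a strongly consistent read; `ConsistentRead` is DynamoDB's long-standing flag, and per this announcement such reads on a multi-Region strong consistency global table return the latest data from any replica region. Table and key names here are hypothetical:

```python
# Sketch: parameters for a strongly consistent DynamoDB read.
# Table and key names are hypothetical; with the new multi-Region strong
# consistency global tables, the announcement says such reads return the
# latest committed data from any replica region.
def strongly_consistent_get(table_name: str, key: dict) -> dict:
    return {
        "TableName": table_name,
        "Key": key,
        "ConsistentRead": True,  # long-standing DynamoDB flag for strong reads
    }

req = strongly_consistent_get("user-profiles", {"userId": {"S": "u-123"}})
# client = boto3.client("dynamodb", region_name="eu-west-1")
# item = client.get_item(**req)
print(req["ConsistentRead"])
```

Note that strongly consistent reads have historically been billed at double the read-capacity cost of eventually consistent ones, which is exactly the sort of pricing subtlety Frank goes on to describe.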
Frank (15:41.422)
I did the AWS certification for databases and the way it priced DynamoDB is just horrible. It's just horrible. They guarantee performance, but the way they guarantee performance mean that underneath you need to really understand that. I think it was when you reach 13 gig of a table, it's going to create a new instance and all your reservations are going to be divided by two. And it was all over the place. So that is on
Stephen Old (15:49.437)
Hmm.
Frank (16:11.456)
one side, the feature is really cool; the other side is, I don't know if that just makes it even more complicated. But hey, that was the one. So global tables, that's done. The other one is Athena, which I love. Yes: Amazon Athena announces managed query results to streamline analysis workflows. So the idea is that it's a new feature that automatically stores, encrypts and manages the lifecycle of query results for you, at no additional cost.
Stephen Old (16:13.554)
Hmm.
Stephen Old (16:18.565)
Maybe.
Stephen Old (16:25.029)
Your favorite. Yeah.
Frank (16:40.866)
So until now, when you were doing a query, you needed to give it an S3 bucket, and in that S3 bucket it would save all the results of the queries. So that S3 bucket could get big, very fast. And by default, yes... yeah, I've had tens of gigs, hundreds of gigs a day. And by default, there was no maintenance. You needed to do the stuff of saying, hey, I want this
Stephen Old (16:42.461)
Hmm.
Stephen Old (16:46.865)
Yep, it did the work.
Stephen Old (16:53.957)
Yeah, especially some stuff you were doing.
Frank (17:10.67)
cleaned every 24 hours, because the reality is you usually read the query result once. Then they introduced things so you could save more, but overall that was still a pain. So now you can choose to have Athena manage the results data for you. This allows you to run queries without first specifying the S3 result location (which is good for API calls, by the way), store results encrypted, and avoid costs from storing query results that are no longer needed,
which usually is just after running the query. So that is quite a cool thing.
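A sketch of what that looks like from the SDK side: previously `start_query_execution` effectively required an S3 `OutputLocation`; with a workgroup configured for the new managed results, you can omit it. Workgroup and database names below are hypothetical:

```python
# Sketch: starting an Athena query via boto3 without an S3 OutputLocation,
# relying on a workgroup configured for the new managed query results.
# Workgroup and database names are hypothetical.
def start_query_params(sql: str, workgroup: str, database: str) -> dict:
    return {
        "QueryString": sql,
        "WorkGroup": workgroup,  # workgroup must have managed results enabled
        "QueryExecutionContext": {"Database": database},
        # note: no ResultConfiguration / OutputLocation, previously mandatory
    }

params = start_query_params("SELECT 1", "finops-wg", "billing")
# athena = boto3.client("athena")
# qid = athena.start_query_execution(**params)["QueryExecutionId"]
print("ResultConfiguration" in params)
```

Which also removes the FinOps chore Frank describes: no more lifecycle rules on a results bucket that fills up with gigabytes of read-once CSVs.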
Stephen Old (17:47.429)
Nice. OK, next one. This is one that came from here. Let me just make sure I'm on the right one. No, this one. And hang on. I've just pressed the button. Uh-oh. Uh-oh. I've just got onto the link. But it's fine. now I've gone backwards. Right. In BigQuery, you can now forecast multiple time series at once using the time series ID column option.
Frank (17:58.985)
What's happening? What's happening?
Stephen Old (18:13.095)
That is available in the ARIMA_PLUS_XREG multivariate time series models. You can try this feature with the "forecast multiple time series with a multivariate model" tutorial, and this feature is generally available. This jumped out to us because, as when we had our friend (was it Henk?) talking about time series data: that is what all of our data is. So it's your billing export, your CUR, et cetera.
This is time series data. Can we use this to better forecast? And anything that is saving us time is a FinOps benefit, was the idea. But we haven't tested it out, so it's worth having a look at the tutorial if you are putting stuff into BigQuery, which we certainly were in my previous role; we built our entire billing engine on it.
Frank (18:55.668)
Yes, I think it can be really interesting, because all these models are going to be based on the past, and if you as the FinOps practitioner can start focusing on the future, because the stuff from the past just works, that'd be absolutely amazing. I don't know, but yeah, that's my hope.
Stephen Old (19:03.955)
Hmm.
Stephen Old (19:10.035)
Yeah.
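For anyone who wants to try this on their billing export, here's a hedged sketch of the BigQuery ML statement the news item points at: one model forecasting many series at once via the time series ID column. Dataset, table and column names are hypothetical, and the option names are as we understand them from the BigQuery ML docs:

```python
# Sketch of a BigQuery ML multivariate forecast over many series at once.
# Dataset/table/column names are hypothetical; run the SQL with the bq CLI
# or a BigQuery client.
def build_create_model_sql(dataset: str) -> str:
    return f"""
CREATE OR REPLACE MODEL `{dataset}.cost_forecast`
OPTIONS(
  MODEL_TYPE = 'ARIMA_PLUS_XREG',
  TIME_SERIES_TIMESTAMP_COL = 'usage_date',
  TIME_SERIES_DATA_COL = 'cost',
  TIME_SERIES_ID_COL = 'project_id'  -- one forecast per project, one model
) AS
SELECT usage_date, cost, project_id, is_weekend
FROM `{dataset}.daily_billing_export`
"""

print("TIME_SERIES_ID_COL" in build_create_model_sql("finops"))
```

The `is_weekend` column stands in for whatever external regressors (the XREG part) you have; the ID column is what saves you from training one model per project.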
Frank (19:14.848)
Music, right, there we go. That's storage for you. So, storage: we have Amazon EC2 now enables you to delete underlying EBS snapshots when deregistering AMIs. EBS snapshots were always one of the things that any FinOps person needs to have a look at, because they're very easy to forget about. You do snapshots, and sometimes you delete a VM and a snapshot is going to be created as a backup, and you forgot
to click or unclick the right button, so it's gonna stay there. And when you were deregistering an AMI, you had to separately delete its associated EBS snapshots.
And that was before. Now you can automatically delete EBS snapshots at the time of AMI deregistration. And that's really useful when you have many, or they are spread all over the place, or whatever. So this capability is available to all customers at no additional cost and is enabled in all AWS commercial regions, including AWS GovCloud and China.
Stephen Old (20:10.375)
Hmm.
Frank (20:24.712)
and in the AWR, yeah, in all regions really, yeah, China and GovCloud included.
Stephen Old (20:28.519)
Really?
That is unusual.
Frank (20:34.798)
Yes, so it's probably just a small software change.
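Roughly, the change collapses two cleanup steps into one call. The parameter name below reflects our reading of the announcement; verify it against the current EC2 API reference before relying on it:

```python
# Sketch: deregistering an AMI and cleaning up its EBS snapshots in one call.
# The DeleteAssociatedSnapshots parameter name is our reading of the
# announcement; check the current EC2 API reference. AMI ID is hypothetical.
def deregister_with_cleanup(image_id: str) -> dict:
    return {
        "ImageId": image_id,
        "DeleteAssociatedSnapshots": True,  # previously a separate delete per snapshot
    }

req = deregister_with_cleanup("ami-0123456789abcdef0")
# ec2 = boto3.client("ec2")
# ec2.deregister_image(**req)
print(req)
```

Before this, the forgotten-snapshot problem Frank describes was exactly the gap: deregistering the AMI left its snapshots behind, quietly billing per GB-month.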
Stephen Old (20:38.995)
Yeah, fair enough. Are we on to me? Right: troubleshoot performance issues on Azure virtual machines using performance diagnostics. You can use the performance diagnostics tool to identify and troubleshoot performance issues in your Azure virtual machines, in one of two modes. Continuous diagnostics collects data every five seconds; it's generally available for Windows VMs and in public preview for Linux VMs. On-demand
diagnostics helps you troubleshoot ongoing performance issues by providing more in-depth data, insights and recommendations based on data collected at a single moment; it's supported on both Windows and Linux. Performance diagnostics stores all insights and reports in a storage account, which you can configure for short data retention to minimise costs. That's one of the reasons it's been pulled through. But it's also worth noting that actually sometimes this can...
help you determine stuff faster than billing data, which takes time. And so it's well worth watching for that. And then straight on to me: in Google Cloud billing, enhanced forecasting models for increased accuracy in cost reports. Billing forecasts now better account for seasonality trends, data irregularities, and missing data, using an enhanced forecasting model that leverages AI to factor in various scenarios such as the following.
Frank (21:38.992)
Yep.
Stephen Old (22:04.069)
Intelligent handling of transient effects caused by known business events, for example, a new workload migrating, causing a usage spike. A deeper understanding of seasonality, for example, various recurring patterns such as daily, weekly, monthly cycles in your spend, or for retailers increases in usage during holiday seasons. And adapting to trends to remain relevant in changing environments, for example, new AI spend. Now this is great. As long as it talks about seasonal.
Frank (22:32.888)
Yeah.
Stephen Old (22:33.139)
and then talks about daily, weekly and monthly. I've worked with customers who basically are linked to education, and so they drop off a cliff in the summer because they're barely used. And it's got to be able to track bits like that for it to become truly relevant. We're seeing more tools move into the market in this space, Google being first to try to capture some of that gap,
Frank (22:42.06)
Mm-hmm. Yep.
Stephen Old (23:00.423)
which is probably for the best because that's normally where new tools are weakest anyway.
Frank (23:00.716)
Yeah, so I think we've jumped also because we were in storage and we're now in visibility.
Stephen Old (23:06.958)
have I gone to the next one? I'm sorry, I...
Frank (23:08.982)
No problem. By the way, on this one, which is quite interesting: I've been building a tool that works in the terminal, and one of the things it does is forecasting, and it uses models. Some of them are standard, so it's easy to replicate. But then it also uses Prophet, which is something from Facebook. And it's very interesting, because Prophet makes assumptions which look completely crazy when you look at them. I think I don't have enough data, but it takes into account... you can say this is a weekly seasonality, this is a monthly seasonality,
Stephen Old (23:14.323)
What have I done with that storage one? Sorry.
Frank (23:38.956)
this is a yearly seasonal and it should change the thing. I still need to test it extensively but I do think that finally we're getting somewhere with forecasting.
Stephen Old (23:44.187)
Nice.
Nice. Yeah, for storage, I was meant to talk about: the limit for matches_prefix and matches_suffix lifecycle conditions per bucket is increased from 50 to a thousand. And I guess that means... well, I didn't really grasp what the major benefit of this one was. But I don't know if you had thoughts.
Frank (24:09.206)
Okay. For me, yeah, that was me; it's a big increase, so it means that you can do more, usually at the same price. I didn't check if the price would go up.
Stephen Old (24:15.932)
Hmm.
Stephen Old (24:20.325)
Yeah, I believe that's true. It's about what pulls back though, isn't it? It's about the amount of responses that pulls back. But yeah, and sorry, I've already started with storage, sorry, with visibility. Shall I carry on?
Frank (24:31.83)
Yeah, go with sound.
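For context on the lifecycle item just discussed, here's a sketch of a GCS lifecycle rule using the matchesPrefix/matchesSuffix conditions whose per-bucket limit just went from 50 to 1,000. Bucket and prefix names are hypothetical; you'd apply it with something like `gcloud storage buckets update gs://BUCKET --lifecycle-file=lifecycle.json`:

```python
# Sketch of a GCS lifecycle config using matchesPrefix / matchesSuffix
# conditions. Prefixes, suffixes and ages are hypothetical examples.
import json

def lifecycle_rule(prefixes: list[str], suffixes: list[str], age_days: int) -> dict:
    return {
        "rule": [{
            "action": {"type": "Delete"},
            "condition": {
                "age": age_days,
                "matchesPrefix": prefixes,
                "matchesSuffix": suffixes,
            },
        }]
    }

cfg = lifecycle_rule(["tmp/", "query-results/"], [".csv.metadata"], 7)
print(json.dumps(cfg, indent=2))
```

The raised limit matters for buckets shared by many teams or pipelines, where each prefix family wants its own retention rule.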
There we go, visibility. I'm going to start. So, now in GA: accelerate troubleshooting with Amazon CloudWatch investigations. As we just said before, using the logs is a way to get access to information first. And so I think it's important: in this case, if it can accelerate troubleshooting, it can accelerate my investigation in CloudWatch. It's cool. So CloudWatch helps you accelerate operational investigations across your AWS environment in just a fraction of the time.
With a deep understanding of your AWS cloud environments and resources, CloudWatch investigations use an AI agent to look for anomalies in your environment, surface related signals, identify root-cause hypotheses, and suggest remediation steps, significantly reducing mean time to resolution. And my hope is that it does that also for cost problems, and
Stephen Old (25:26.067)
Hmm.
Frank (25:29.4)
Find the signals, the root cause, suggest remediation steps, all this kind of stuff. That'd be amazing. If it's not there, please start doing it.
Stephen Old (25:39.699)
And have you got a second one? Yep.
Frank (25:41.036)
Next one. Yeah, I have the second one, which is AWS Invoice Summary API is now generally available.
And so today AWS announces the general availability of the Invoice Summary API. This allows you to retrieve your AWS invoice summary details programmatically via the SDK. You can retrieve multiple invoice summary details by making a single API call that accepts input parameters like account ID, invoice ID, billing period, and date range. This can be absolutely amazing: every time you buy, for example, an RI or savings plan, you get an invoice, so instead of receiving 40, 50... it
can easily go to many a month. So if all of a sudden there is an API that you can call, it aggregates; you do some elaboration, aggregate it and send it back to finance as just one thing that summarises it all. I think they'll be extremely happy. I would be.
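The aggregation Frank describes is the easy part once the API hands you the summaries. The record shape below is hypothetical (the real field names come from the new Invoice Summary API; see the AWS Billing SDK docs), but the pattern is the point:

```python
# Sketch: rolling a month of invoice summaries into one figure for finance.
# The record shape here is hypothetical; real field names come from the
# Invoice Summary API responses.
def total_invoiced(summaries: list[dict]) -> float:
    """Sum the totals of many invoice summaries (e.g. RI/SP purchase invoices)."""
    return sum(s["TotalAmount"] for s in summaries)

# e.g. what a month with several Savings Plan purchase invoices might look like
month = [
    {"InvoiceId": "INV-1", "TotalAmount": 1200.00},
    {"InvoiceId": "INV-2", "TotalAmount": 350.25},
    {"InvoiceId": "INV-3", "TotalAmount": 89.75},
]
print(f"one line for finance: ${total_invoiced(month):,.2f}")
```

One API call with a billing-period filter, one sum, one line in the finance report, instead of dozens of PDFs.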
Stephen Old (26:33.082)
Nice. Indeed. So I've done my first two. So the last one I have in this area is that new fields have been added to the Cloud Billing data export, and hopefully therefore the rebilling data exports as well, for partners. To prepare for the expansion of the spend-based committed use discounts program, they've added new data fields to the schema for the Cloud Billing standard and detailed data exports to BigQuery.
These new fields give more information about the prices charged for your Google usage and consumption models. That is interesting. We will see what that means for what comes down the line for the CUDs.
Frank (27:14.798)
Could it be, and this is me hoping, okay, this is hope-as-a-service: do you think that FOCUS is asking for this kind of detail and they are starting to put it back into the standard one? Because for CUD-style things, they do require that data.
Stephen Old (27:29.501)
I don't know if FOCUS has more detail than...
Stephen Old (27:38.112)
Yeah, they have got some standard bits they ask for, so it could absolutely well be that. I guess it could also be that there are some things in FOCUS, maybe effective prices in there, which is only an AWS thing, and so it could be them meeting that, which is a useful metric.
We'll see. Yeah, yeah, I don't know. Yeah, I guess it could be that. It could be that they're expanding it into more areas, I suppose, as well. Who knows? Right.
Frank (28:07.084)
Let us know.
Frank (28:12.375)
Yep.
Frank (28:17.816)
Commitments. So we have one-year EC2 Instance Savings Plans available for P5 and P5en. There was a new one, by the way, that we announced earlier; it goes above this one, the B200 one.
But starting today, EC2 one-year Instance Savings Plans are now available for EC2 P5 and P5en in all regions where these instances are available. And what that says is that now that they've released a new instance, they are incentivising people to use and commit to the old version, and stay on the old version, probably because they don't have an infinite number of NVIDIA chips yet.
Stephen Old (29:01.485)
Yes, that makes some sense. Yeah.
Frank (29:03.054)
Because I think before, they had only a three-year option, and now there is a one-year option. And honestly, committing for three years at this point in time for GPUs is...
Yeah, I think that's the right term. Next one. So
Stephen Old (29:18.119)
mad. Yeah. Yeah. Yeah. Indeed.
Frank (29:27.976)
It's Amazon RDS for Oracle now offers reserved instances for R7i and M7i instances.
Stephen Old (29:36.979)
Let me see if I can find the pricing difference, because for RDS for Oracle, like 80 to 90% of the cost is the license.
Frank (29:44.59)
Yeah, with up to 46% cost savings compared to on-demand prices. These instances are powered by... So, a 46% reduction. So that might be...
Stephen Old (29:53.875)
That's massive. It's not off the instance though. Is that, oh, well, yeah, depends if it's.
Frank (30:00.174)
Well, compared to on-demand price. And this is Amazon RDS for Oracle. You know, it applies to both Multi-AZ and Single-AZ; customers can move freely between configurations. Yeah, you start having some flexibility in there. Amazon RDS for Oracle reserved instances also provide size flexibility for the Oracle database engine under the bring-your-own-license licensing model. So it's bring your own license, yeah.
Stephen Old (30:03.281)
I guess if it's license included, or finger.
Stephen Old (30:25.651)
Bring your own license, that makes more sense. Yeah. Because as soon as you're doing your own... So, let's see if I can do an example. So 2xlarge, on demand. Well, this is interesting. So this is the license-included version, and it's saying the on-demand rate is 1.175. The effective rate... oh, there's a lot of asterisks underneath that.
"Effective hourly price helps you calculate the amount of money a reserved instance will save you over on-demand pricing when you reserve." OK, I suppose. OK, that's fine. Yeah. So we were happy with that; we're just talking about effective price. That was 37%, but that is for a small one. But it drops to 28 in the middle. Oh, no, that's just for changing from the 7i's to the 6i's. That drops to 28.
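The "effective hourly price" the footnote describes is just the upfront cost amortised over the term plus any recurring hourly charge. A sketch, with the on-demand rate from above and a hypothetical upfront chosen to land near the ~37% discussed:

```python
# Effective hourly price of a reserved instance: amortised upfront + hourly.
# The on-demand rate is the one read out above; the upfront figure is a
# hypothetical illustration, not a real RDS price.
HOURS_PER_YEAR = 8760

def effective_hourly(upfront: float, hourly: float, years: int = 1) -> float:
    return upfront / (HOURS_PER_YEAR * years) + hourly

def savings_vs_on_demand(on_demand: float, effective: float) -> float:
    return (1 - effective / on_demand) * 100

od = 1.175                         # license-included 2xlarge on-demand $/hr, as above
eff = effective_hourly(6500, 0.0)  # hypothetical all-upfront one-year RI
print(f"effective ${eff:.3f}/hr -> {savings_vs_on_demand(od, eff):.0f}% off on-demand")
```

Worth remembering that the headline "up to 46%" will be the best case (largest sizes, longest term, all upfront), not what a small instance sees.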
Frank (31:18.158)
Yeah, this is R7i and M7i, it's very specific.
Stephen Old (31:22.575)
R7i and M7i, that is interesting. I can't even see them on the website pricing data. Maybe that's because they have to be under enterprise. No results found. No. Yeah, that's what I'm on right now. Yeah.
Frank (31:33.034)
No, I think it's Amazon RDS for Oracle. might be a...
Oh well, they give a price. There is a price table there. I will have a look. Let me see again. Where's the Amazon Oracle pricing?
Stephen Old (31:53.291)
I'm just... I'm on the Amazon RDS for Oracle pricing page. It might be in the wider pricing, but it has something else. But I don't have the... unless I have to go further down, I suppose, but I don't have the R7s on there at all.
Frank (31:59.235)
Yep.
Frank (32:03.788)
I don't know. Okay. License: BYOL. And it depends on the region, but yeah.
Stephen Old (32:09.299)
Hmm.
Stephen Old (32:14.183)
Yeah, I guess I could. I'm in Ohio, which is normally... hang on, I'm going to Virginia. Nothing's coming up.
Frank (32:20.118)
Okay, well... That's the announcement, guys!
Stephen Old (32:22.361)
All right, that's a nice one. But yeah, bring your own license makes sense, that it'd be that cheap, because it's just infrastructure, isn't it? So what do I have? Well, it's quite a simple one. The resource-based CUD for M4 six-terabyte is now available. I did have this one open, but I'll go down and find it. There it is. Generally available: resource-based committed use discounts are available for the M4 machine types that come with
six terabytes of memory. I'm just loading the page that follows up, because it's not loading on that one. See if I can get a discount figure. You can attach reservations to the resource-based commitments. No, it's just the standard guide, and it's been added there, but it's just for these big monsters that you can now have it. What's next?
Well, we haven't got anything in pricing. I guess there have been different bits that could have gone in multiple ones, could have gone in pricing, but have gone elsewhere. So the next one is savings. In Dataproc, zero-scale clusters are in preview. So I have more detail on this one. Dataproc. Let me see if I can actually get better. Basically, clusters can be scaled down to zero.
Frank (33:40.086)
That was the idea, yeah, interesting for that.
Stephen Old (33:41.203)
Yeah, exactly. And in fact, even the article behind that just gives you the CLI command. But yeah, you can scale them down to zero now, which obviously saves you significantly, and makes it more like the serverless services that you have. It has to be running versions 2.2.53 and later, and it supports only secondary workers, not the primary workers. Makes sense, right? So you'll still have to have something up. But your secondary workers can be scaled to zero.
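A hedged sketch of what that one CLI command presumably looks like. The cluster name and region are placeholders, and the exact flags and image-version requirements may differ in the zero-scale preview, so treat the docs as authoritative:

```shell
# Sketch only: scale a Dataproc cluster's secondary workers down to zero.
# "my-cluster" and "us-central1" are placeholders; check the preview
# documentation for the flags your image version actually supports.
gcloud dataproc clusters update my-cluster \
    --region=us-central1 \
    --num-secondary-workers=0
```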
Frank (33:47.95)
Okay.
Frank (34:16.76)
FinOps-y, FinOps-y. And I thought about this one mostly because we all look for information from the cloud vendors on a regular basis. So: announcing intelligent search for re:Post and re:Post Private. And so this is where AWS provides some information. So they have the knowledge base. And so you now have a more efficient and intuitive way to access AWS knowledge across multiple sources. This new capability transforms how builders, which is people, not AI builders,
Stephen Old (34:25.329)
Yeah.
Frank (34:46.704)
find information, providing synthesized answers for AWS resources in one place. So go have a look, go to AWS re:Post and you can start looking at some interesting information. It's a little like Stack Overflow, but yeah.
That's one. Next one was... I've got lost. Yes, so we have: Amazon Q Developer now helps customers optimize AWS costs. That could have gone into the AI bit too, but with this launch, customers can ask Amazon Q Developer questions like "How can I lower my AWS bill?" to receive prioritized recommendations based on potential savings, implementation effort and performance risk. Customers can also ask detailed follow-up questions like,
Stephen Old (35:25.704)
Hmm.
Frank (35:34.048)
like "How was this recommendation calculated?" or "How was this instance identified as idle?" and receive narrative explanations. So what is very interesting, it seems that they've created, and I'm having some fun with that, some sort of an MCP, so access to tools, so that Amazon Q can now access
all the billing data, the calculator, the recommendations, the optimizer, and provide you with some information. So that's quite cool. I recommend you try it. I need to try it out. My bill is so small that it would just tell me "you have nothing to do" or "delete all your data". That's probably going to be the only recommendation for me. Next.
Stephen Old (36:16.347)
Nice.
Frank (36:17.486)
Do we have another one? Is that next one still mine? Yes. So, Cost Optimization Hub supports recommendations for Amazon Aurora. So Cost Optimization Hub now supports instance and cluster storage recommendations for Amazon Aurora databases. Again, something that could have gone in so many places. But these recommendations help you identify idle database instances and choose the optimal DB instance class and storage configuration for your Aurora databases.
Stephen Old (36:19.952)
Yeah, yep.
Frank (36:46.232)
So you can now filter, consolidate, and prioritize optimization recommendations across your organization's member accounts through a single dashboard. Yes, another one. And it shows you estimated savings, et cetera. And yeah, so that's another interesting bit to add to your capabilities, FinOps practitioners out there.
Stephen Old (37:08.145)
Indeed. We've got one of those funny ones, Frank, that has disappeared since the time of research.
Frank (37:14.242)
Nah, which one? The 1.2? But I've seen it. I've seen it this morning. No, what I mean is, I think I've seen this morning that Microsoft was supporting FOCUS 1.2.
Stephen Old (37:16.315)
And it's quite a big one. The next one. Yeah. Click through, see if you're seeing the same.
Stephen Old (37:25.467)
You've seen it this morning. It was only there last week. So maybe they've just changed the article. Yeah.
Stephen Old (37:37.103)
Yeah, exactly. Yeah. So basically, listeners, we had: Microsoft Cost Management now supports exporting cost and usage data in the FinOps Open Cost and Usage Specification 1.2 schema. But the article that had that has gone 404. We've had this in the past, when things have disappeared on us. But we're just checking to see if it has reappeared anywhere else. Now something else has popped up that says it's been announced.
Frank (37:59.373)
Yeah.
Frank (38:06.19)
Microsoft is going through quite a lot of changes at the moment and FinOps has been impacted too.
Stephen Old (38:11.046)
It is.
Stephen Old (38:14.991)
It has, yes. People that we know and love are no longer there. Yeah, I can't see anything on Google anymore, apart from the fact that you can see the other news outlets. Hang on, we're not a news outlet. That sounded very arrogant. Other places have obviously seen that and reshared it.
Frank (38:37.71)
Because we are totally a news outlet. It's like we were asked, "Do you want to be press?" I said, look, have you seen? Look at us.
Stephen Old (38:43.151)
Yeah. That requires some level of professionalism. Right. What are we up to? So yeah, maybe it's been pulled. But you said you'd seen this one. I can't see it. Anyway, so the next one then is BigQuery advanced runtime, in preview. The new advanced runtime can improve query execution time and slot usage, helping optimize costs.
I've dug into the stuff while you were doing the last one. I can't see anything that particularly screams that to me, but it does make logical sense. That's one of the ones from the tool we're using. Another one we saw was that you can apply maximum instances at service level versus revision level now. And when applying maximum instances, the settings go into effect as follows: service level is immediate, and revision level is upon deployment of the revision. So.
Tagged revisions with service-level maximum instances are started, but only count towards the service-level maximum if they're part of a traffic split. So this allows you to have more rapid impact, and allows you to potentially reduce cost. And then the final one... no, that's my last one. Yes, on that one. And I have got one at the bottom.
Frank (40:07.342)
Yeah, and I... here we go. Yeah, I have another one. I've added my tools, the ideas I've been publishing on. If you look for me, Frank Contrepois, on GitHub, you'll find I'm trying to build a CUR anonymizer on one side. That's an open source project. And also some terminal-based little tools
Stephen Old (40:08.371)
which Misc and Silly probably could have gone into. Have you got another one?
Stephen Old (40:13.962)
yeah.
Frank (40:29.312)
so that you can do some FinOps without having to have a full tool set. You can just take the pieces you need, and slowly I'm going to grow the pieces, and you can pipe the output of one into the input of the other, so you can chain them in a way any Linux person will know how to.
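Frank's pipe-friendly approach can be illustrated with plain Unix tools. The CSV shape and the aggregation step here are made up for the example and are not his actual tools:

```shell
# Illustrative only: aggregate per-service cost from a tiny CSV and sort,
# the way small single-purpose FinOps tools could be chained with pipes.
printf 'service,cost\nec2,10.50\ns3,2.25\nec2,4.50\n' \
  | awk -F, 'NR > 1 { total[$1] += $2 }
             END { for (s in total) printf "%s,%.2f\n", s, total[s] }' \
  | sort
```

Each stage reads lines on stdin and writes lines on stdout, which is what makes the pieces composable: swap the `printf` for a real cost export, or add another filter in the middle, without changing anything else.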
Stephen Old (40:47.311)
Nice, nice. So not me. Even though I'm using more and more Linux, I just... I don't know how I do it. It's a lot of guesswork, a lot of Googling. Right. I thought I'd started one and got rid of it when it just said "my tools"; I thought it was my error. The last ones we've got are Azure, Misc and Silly. But this one is actually that you can maximize your ROI for Azure OpenAI. And let me just see what it says. I've now forgotten, because it was at the beginning. We've got it.
I think it's an article, isn't it? Yeah. So on June the 18th, they did a five-minute article, "Maximize Your ROI for Azure OpenAI". And it talks about how, when you're building with AI, every decision counts, especially when it comes to cost. And whether you're getting started or scaling to enterprise grade, the last thing you want is unpredictable pricing or rigid infrastructure slowing you down, and it's designed to help you with that.
I mean, I've been doing more AI stuff recently, trying to really manage the cost and the carbon impact of that. And I think, you know, it is possible, but you really have to have cost and carbon as a non-functional requirement in your design, because otherwise you can go absolutely mad for it. And, you know, just start with the least and the smallest and build it from there. I think a lot of people are just going, "oh, I might need that", and whacking on full, you know.
Frank (41:54.837)
Yep.
Stephen Old (42:07.778)
You've really got to think with a constraint in mind. But yeah, pretty good article. It talks about the different ways of using it: where to use Foundry, where to use the large batch, provisioned throughput, that kind of stuff.
Frank (42:19.754)
And yep.
I want to highlight that. So I found the page, which was focus.finops.org, get started. Get-started, and it shows Microsoft Azure 1.2, conformance gap review. But if you click on the link... so that's all good. But if I try to go to the Microsoft link, it shows 1.0. It tells you, now you can also export in FOCUS 1.0. So I don't know if they've removed the 1.2, but the FinOps Foundation report
Stephen Old (42:31.89)
Yeah.
Stephen Old (42:38.895)
Also goes... Yeah, it's gone. Yeah.
Stephen Old (42:48.049)
I think they rolled it back, yeah.
Frank (42:51.856)
Yeah.
Stephen Old (42:54.023)
I said that, yeah. I mean, they'll have seen it, because it was last month and it was there last week still. So people will have built on top, like their sites on top of it. And then it has obviously disappeared.
Frank (43:04.59)
Yeah, they've changed it on the 2nd of June. "These documents summarize FOCUS 1.0." So they seem to have done the 1.2 and removed it.
Stephen Old (43:16.435)
It feels like it, but maybe something else is going on. Maybe it's just a website thing. I don't know. Unusual. Well, no, I mean, we've both looked. That's been a lot longer than we anticipated. All sorts of different bits and a bit of research going on on the fly. But thank you everyone, if you have managed to stick with us to this part. If you think we're being silly and 1.2 is there for Azure again, then... maybe I should log into my Azure portal and have a look. No, no, we haven't got time. We've talked far too long already.
Frank (43:18.528)
Maybe. Yeah, it's just maybe me not looking at the right place.
Frank (43:40.81)
Let us know. Yeah. Yes.
Stephen Old (43:46.065)
But thank you, Frank. Thank you, listeners. And if you want to come and join us on the show, come in for an interview episode. We are always looking for people. Just come with a topic. Perfect. Thanks very much. Take care. Bye bye.
Frank (44:00.344)
Yep. Thank you everyone. Bye bye.