Using ChatGPT is not bad for the environment


If you don’t have time to read this post, these four graphs make most of the argument:

Source. Each bar here represents 1 year of the activity, so the live car-free bar represents living without a car for just 1 year, etc.

I got these numbers from back-of-the-envelope calculations using publicly available data about each service. If you think they’re wrong, I’d be happy to update them!

I’m not usually interested in writing basic debunking posts, but I often talk and read about the debate around emissions associated with AI, and it’s completely clear to me that one side is getting it entirely wrong and spreading misleading ideas. These ideas have become so widespread that I run into them constantly, but I haven’t found a good summary explaining why they’re wrong, so I’m putting one together.

At the last few parties I’ve been to I’ve offhandedly mentioned that I use ChatGPT, and at each one someone I don’t know has said something like “Ew… you use ChatGPT? Don’t you know how bad that is for the planet? And it just produces slop.” I’ve also seen a lot of popular Twitter posts (many above 100,000 likes) very bravely announcing that it is wrong to use AI because it’s burning the planet. Common points made in these conversations and posts are:

  • Each ChatGPT search emits 10 times as much as a Google search.

  • A ChatGPT search uses 500 mL of water.

  • ChatGPT as a whole emits as much as 20,000 US households per day. It uses 200 Olympic swimming pools’ worth of water each day.

  • Training an AI model emits as much as 200 plane flights from New York to San Francisco.

The one incorrect claim in this list is the 500 mL of water point. It’s a misunderstanding of an original report which said that 500 mL of water are used for every 20-50 ChatGPT searches, not for every search. Every other claim in this list is true, but each also paints a drastically inaccurate picture of the emissions produced by ChatGPT and other large language models (LLMs) and how they compare to emissions from other activities. These are not insignificant errors: they fundamentally misrepresent energy use, and they risk distracting the climate movement.

One of the most important shifts in how we talk about climate has been the collective realization that individual actions like recycling pale in comparison to the urgent need to transition the energy sector to renewables. The current AI debate feels like we’ve forgotten that lesson. After years of progress in prioritizing systemic issues over personal lifestyle changes, it’s as if everyone suddenly started obsessing over whether the digital clocks in our bedrooms use too much energy and began condemning them as a serious problem.

Separately, LLMs have been an incredible life improvement for me. I’ve found that most people who haven’t actually played around with them much don’t realize how powerful they’ve become or how useful they can be in everyday life. They’re the first piece of new technology in a long time that I’ve become insistent that absolutely everyone try. If you’re not using them because you’re worried about the environmental impact, I think you’ve been misled into missing out on one of the most useful (and scientifically fascinating) new pieces of technology in my lifetime. If people in the climate movement stop using them they will lose a lot of potential value and ability to learn quickly. This would be a shame!

On a meta level, there’s a background assumption about how one is supposed to think about climate change that I’ve become tired of, and that the AI emissions conversation is awash in. The bad assumption is:

Climate change is too complicated, important, and interesting a problem to approach using this rule. When people complain to me about AI emissions I usually interpret them as saying “I’m a good person who has done my part and identified a bad guy. If you don’t dislike the bad guy too, you’re suspect.” This is a mind-killing way of thinking. I’m using this post partly to show how I’d prefer to think about climate instead: we coolly look at the numbers, the institutions, and the actors we can actually collectively influence, and we respond based on where we will actually have the most positive effect on the future, not based on who we happen to be granting status to in the process. I’m not inclined to give status to AI companies. A lot of my job is making people worry more about AI in other areas. What I want is for people to actually respond to the realities of climate change. If you’re at all worried about your own use of AI contributing to climate change, you have been tricked into building monsters in your head, and you need to snap out of it.

The climate situation is bad enough. Let’s not do this.

Here are some assumptions that will guide the rest of this post:

If you’re not trying to reduce your emissions, you’re not worried about the climate impact of individual LLM use anyway. I’ll assume that you are interested in reducing your emissions and will write about whether LLMs are acceptable to use.

There’s a case to be made that people who care about climate change should spend much less time worrying about how to reduce their individual emissions and much more time thinking about how to bring about systemic change to make our energy systems better (the effect you as an individual can have on our energy system often completely dwarfs the effect you can have via your individual consumption choices), but that’s a topic for another post.

Our energy system is so reliant on fossil fuels that individuals cannot eliminate all their personal emissions. Immediately stopping all global CO2 emissions would cause billions of deaths. We need to phase out emissions gradually by transitioning to renewables and making trade-offs in energy use. If everyone concerned about climate change adopted a zero-emissions lifestyle today, many of them would die. The rest would lose access to most of modern society, leaving them powerless to influence energy systems. Climate deniers would take over society. Individual zero-emissions living isn’t feasible right now.

The average children’s hospital emits more CO2 per day than the average cruise ship. If we followed the rule “Cut the highest emitters first” we’d prioritize cutting hospitals over cruise ships, which is obviously a bad idea. Reducing emissions requires weighing the value of something against its emissions, not blindly cutting based on CO2 output alone. We should ask questions like “Can we achieve the same outcome with lower emissions?” or “Is this activity essential?” But the rule “Find the highest-emitting thing in a group of activities and cut it” doesn’t work.

In this post, I’ll compare LLM use to other activities and resources of similar usefulness. If you believe LLMs are entirely useless, then we should stop using them, but I’m convinced they are useful. Part of this post will explain why.

If climate change is an emergency that requires lots of people working together to fix in limited time, we cannot afford to get distracted by focusing too much of our effort and thinking on extremely small sources of emissions. The climate movement has seen a lot of progress and success in shifting its focus away from individual actions like turning off lights when leaving a room toward large systemic changes like building smart grid infrastructure or funding renewable tech. Even if you are only focused on lifestyle changes, it is best to focus on the most impactful lifestyle changes for the climate. It would be much better for climate activists to spend all their time helping people switch to green heating than encouraging people to hang-dry their clothes:

Source. Each bar here represents 1 year of the activity, so the live car-free bar represents living without a car for just 1 year, etc.

If the climate movement should not focus its efforts on getting individual people to hang-dry their clothes, it should definitely not focus on convincing people not to use ChatGPT:

Another common concern about LLMs is their water use. This matters even though it’s not a direct cause of climate change. I’ll address it in the second part of the post. There might be other concerns as well (the supply chains involved in building data centers in the first place), but from what I can tell those environmental concerns also apply to basically all computers and phones, and I don’t see many people saying that we need to immediately stop using our computers and phones for the sake of the climate. If you think there are other bad environmental consequences of LLMs that I’m missing in this post, I’d be happy to hear about them in the comments!

Any statistic about the energy consumption of individual internet activities has large error bars, because the internet is so huge and its energy use is spread across so many devices. Every source I’ve used arrived at its numbers by dividing one very large uncertain number by another. I’ve tried my best to report numbers as they exist in public data, but you should assume there are significant error bars in either direction. What matters is the proportions more than the exact numbers.

If LLMs are not useful at all, any emissions, no matter how small, are not worth the trade-off, and we should stop using them. This post relies on LLMs being at least somewhat useful, so I’m going to make that case here.

I think my best argument for why LLMs are useful is to just have you play around with Claude or ChatGPT and try asking difficult factual questions you’ve been trying to get answers to. Experiment with the prompts you give it and see whether asking very specific questions, with instructions about how you’d like the answer to be formatted (bullet points, textbook-like paragraph), gets you what you want. Try uploading a complex text that you’re trying to understand and use the prompt “Can you summarize this and define any terms that would be unfamiliar to a newcomer to the field?” Try asking it for help with a complex technical problem you’re dealing with at work.

If you’d like testimonials from other people, you can read their accounts of how they use LLMs. Here’s a good one. Here’s another. This article is a great introduction to just how much current LLMs can do.

LLMs are not perfect. If they were, the world would be very strange. Human-level intelligence existing on computers would lead to some strange things happening. Google isn’t perfect either, and yet most people get a lot of value out of using it. Receiving bad or incorrect responses from an LLM is to be expected. The technology is attempting to recreate a high-level conversation with an expert in any and every domain of human knowledge. We should expect it to occasionally fall short.

I personally find LLMs much more useful as a tool for learning than most of what exists on the internet outside of high-quality specialized articles. Most content on the internet isn’t the Stanford Encyclopedia of Philosophy or Wikipedia. If I want to understand a new topic, it’s often much more useful for me to read a ChatGPT summary than to watch an hour of some of the best YouTube content about it. I can ask very specific clarifying questions about a topic that would otherwise take a long time of digging around the internet to answer.

What’s the right way to think about LLM emissions? Something suspicious that a lot of claims about LLMs do is compare them to physical real-world objects and their emissions. When talking about global use of ChatGPT, there are a lot of comparisons to cars, planes, and households. Another suspicious move is to compare them to common online activities that don’t normally come up in conversations about the climate (when was the last time you heard a climate scientist bring up Google searches as a significant cause of CO2 emissions?). The reason this is suspicious is that most people are lacking three key intuitions:

Without these intuitions, it is easy to make any statistic about AI seem like a ridiculous catastrophe. Let’s investigate each one.

It is true that a ChatGPT question uses 10x as much energy as a Google search. How much energy is this? A good first step is to ask when the last time was that you heard a climate scientist bring up Google search as a significant source of emissions. If someone told you that they had done 1,000 Google searches in a day, would your first thought be that their climate impact must be terrible? Probably not.

The average Google search uses 0.3 watt-hours (Wh) of energy. The average ChatGPT question uses 3 Wh, so if you choose to use ChatGPT over Google, you are using an additional 2.7 Wh of energy.

How concerned should you be about spending 2.7 Wh? 2.7 Wh is enough to

In Washington DC, where I live, the household cost of 2.7 Wh is $0.000432.

Sitting down to watch 1 hour of Netflix has the same impact on the climate as asking ChatGPT 300 questions in 1 hour. I suspect that if I announced at a party that I had asked ChatGPT 300 questions in 1 hour I might get accused of hating the Earth, but if I announced that I had watched an hour of Netflix or driven 0.8 miles in my sedan the reaction would be a little different. It would be strange if we were having a big national conversation about limiting YouTube watching, or never buying books, or avoiding uploading more than 30 photos to social media at once, or limiting ourselves to 1 email per day for the sake of the climate. If this were happening, climate scientists would rightly say that the public is getting bogged down in minutiae and not focusing on the big, real ways we need to act on climate. Getting worried about whether you should use LLMs is as much of a distraction from the real issues of climate change as worrying about whether you should stop the YouTube video you’re watching 12 seconds early for the sake of the Earth.

Let’s take an extreme case and imagine that the reason you don’t want to use LLMs is that if everyone used LLMs instead of Google for every search, this would use too much energy. There are 8,500,000,000 Google searches per day. Let’s imagine that we replaced every single Google search with a ChatGPT question. That takes us from a daily energy use of 2,550,000,000 watt-hours (Wh) to 25,500,000,000 Wh, an additional 22,950,000,000 Wh, or about 23 gigawatt-hours (GWh). The daily global energy demand of the internet is 2,200 GWh, so this would increase the daily global energy demand of the internet by about 1%. A global switch from Google to ChatGPT would therefore be about the same as increasing the global population using the internet by 1%. If you heard that next year 1% more people would have access to the internet around the world, how concerned would that make you for the climate? Last year the actual growth rate of internet users was 3.4%.
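If you want to check that arithmetic yourself, here’s a minimal Python sketch using the rough estimates quoted above (the 0.3 Wh and 3 Wh per-search figures, 8.5 billion daily searches, the ~2,200 GWh daily internet total, and an electricity price of roughly $0.16/kWh, which is what the $0.000432 figure implies):

```python
# Back-of-the-envelope check of the numbers above. All inputs are rough
# public estimates quoted in this post, not measured values.

GOOGLE_WH = 0.3               # Wh per Google search (rough estimate)
CHATGPT_WH = 3.0              # Wh per ChatGPT question (rough estimate)
SEARCHES_PER_DAY = 8.5e9      # daily Google searches (rough estimate)
INTERNET_GWH_PER_DAY = 2_200  # daily global internet energy demand (rough estimate)
PRICE_PER_KWH = 0.16          # $/kWh, approximate DC residential rate (assumption)

extra_wh_per_search = CHATGPT_WH - GOOGLE_WH
extra_cost = extra_wh_per_search / 1000 * PRICE_PER_KWH
print(f"Extra energy per search: {extra_wh_per_search} Wh, costing ${extra_cost:.6f}")

# If every Google search became a ChatGPT question:
extra_gwh_per_day = SEARCHES_PER_DAY * extra_wh_per_search / 1e9
print(f"Extra daily demand: {extra_gwh_per_day:.0f} GWh "
      f"({extra_gwh_per_day / INTERNET_GWH_PER_DAY:.1%} of the internet's daily energy)")
```

Running this gives about 23 GWh per day, or roughly 1% of the internet’s daily energy demand, matching the paragraph above.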

In my experience ChatGPT is so much more useful than a Google search that I’d rather use it than search Google ten times anyway. I can often find things I’m looking for much faster with a single ChatGPT question than with multiple Google searches. Here’s a search I did asking it to summarize what we know about the current and future energy sources used for American data centers. It also saves me a lot of valuable time compared to searching Google ten times.

A lot of complaints about the total energy use of LLMs don’t make sense when you consider the number of people using them. In considering LLM use, we can’t just look at their total emissions. We need to consider how many people are using the product. Someone could correctly point out that Google as a company produces far more emissions than a Hummer, but this is silly because Google has billions of users and the Hummer has one, and Google is very efficient with the energy consumed per user.

Here are some examples to illustrate the point:

ChatGPT as of this writing has 300,000,000 daily users and answers 1,000,000,000 messages per day. Let’s imagine that you can snap your fingers and create one additional American household, with all its energy needs and environmental impact. This American household is special. The people in it have one hobby: spending all their time writing very detailed responses to emails. They enjoy doing this and never stop, and they’re so good at it that they have 15,000 people emailing them every day, each person sending on average 3.3 emails, for a total of 50,000 emails per day, or 1 email every 2 seconds, 24 hours per day. People seem to find their replies useful, because the rate of use just keeps going up over time. Would you choose to snap your fingers and create this household, even though it will have the climate impact of one additional typical American household? Seems like a clearly good trade-off. What if you had the option to do it a second time, so that 50,000 more messages could be answered by a second household every day? Again, this seems worth the emissions. If you keep snapping your fingers until you meet the demand for their replies, you will have created 20,000 new American households answering 1 billion messages per day. 20,000 American households is about the size of the Massachusetts city of Barnstable:

If one additional copy of Barnstable, Massachusetts appeared in America, how much would that make you worry about the climate? It would be an increase in America’s population of 0.015%. What if you found out that everyone who lived in the new town spent every waking moment sending paragraphs of extremely useful, specific text about any and all human knowledge to the world, and kept getting requests for more? Of all the places and institutions in America where we could cut emissions, should we start by preventing that town from growing?
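Here’s a small sketch of that household arithmetic, using the figures above plus two assumptions of mine: a US population of roughly 340 million and an average household size of about 2.5 people:

```python
# A sketch of the "email household" analogy, using the rough figures quoted
# in the post (1 billion daily messages, 50,000 messages per hypothetical
# household). Population and household size are my own rough assumptions.

DAILY_MESSAGES = 1_000_000_000
MESSAGES_PER_HOUSEHOLD = 50_000   # 15,000 emailers * ~3.3 emails each
US_POPULATION = 340_000_000       # approximate (assumption)
PEOPLE_PER_HOUSEHOLD = 2.5        # rough US average (assumption)

households_needed = DAILY_MESSAGES / MESSAGES_PER_HOUSEHOLD
new_residents = households_needed * PEOPLE_PER_HOUSEHOLD
print(f"Households needed: {households_needed:,.0f}")               # ~20,000
print(f"Population increase: {new_residents / US_POPULATION:.3%}")  # ~0.015%
```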

This number only really applies to our largest AI models, like GPT-4. GPT-4’s energy use in training was equivalent to about 200 plane flights from New York to San Francisco. Was this worth it?

To understand this debate, it’s really helpful to understand what it means to actually train an AI model. Writing that up would take too much time and isn’t the focus of this post, so I asked ChatGPT to describe the training process in detail. Here’s its explanation. What’s important to understand about training a model like GPT-4 is:

  • It’s a one-time cost. Once you have the model trained, you can tweak it, but it’s good to go and ready to be used. You don’t have to keep retraining it afterward at anywhere near the same energy cost.

  • It’s incredibly technologically complex. Training GPT-4 required 2 × 10²⁵ floating point operations (basic calculations like addition, subtraction, multiplication, and division). This is 70 million times as many calculations as there are grains of sand on the Earth. OpenAI had to wire together 25,000 state-of-the-art GPUs specially designed for AI to carry out these calculations over a period of 100 days. We should expect this process to be somewhat energy intensive. (A rough sanity check of the per-GPU throughput this implies follows this list.)

  • It gave us a model that can give extremely long, detailed, reliable responses to very specific questions about basically all of human knowledge. This is not nothing.

  • It’s uncommonly large and energy intensive. There are only a few AI models as large as GPT-4. A lot of the AI applications you see are using the results of training GPT-4 rather than training their own models.
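As promised above, here’s a rough sanity check of those training numbers. The inputs (2 × 10²⁵ operations, 25,000 GPUs, 100 days) are public estimates, and the point is just that the implied per-GPU throughput is physically plausible for the A100-class accelerators reportedly used, whose reduced-precision peak is on the order of a few hundred teraFLOP/s:

```python
# Rough sanity check: does 2e25 floating-point operations spread over
# ~25,000 GPUs for ~100 days imply a believable sustained rate per GPU?

TOTAL_FLOPS = 2e25   # rough public estimate
NUM_GPUS = 25_000    # rough public estimate
DAYS = 100           # rough public estimate

seconds = DAYS * 24 * 3600
per_gpu_flops_per_s = TOTAL_FLOPS / (NUM_GPUS * seconds)
print(f"Implied sustained throughput: {per_gpu_flops_per_s / 1e12:.0f} TFLOP/s per GPU")
# ~93 TFLOP/s per GPU, a believable fraction of peak, so the quoted figures hang together.
```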

It’s helpful to think about whether getting rid of “200 flights from New York to San Francisco” would really move the needle on climate. There are about 630 flights between New York and San Francisco every week. If OpenAI hadn’t trained GPT-4, that would be about the same as there being no flights between New York and San Francisco for about 2 days. That’s not 2 days per week. It’s 2 days, total. Even if GPT-4 had to be retrained every year (and remember, it doesn’t), that’s less than 1% of the emissions from flights between these two specific American cities. How much of our collective effort is it worth to stop this?

200 planes can carry about 35,000 people. About 20 times that many people fly from around the country to Coachella each year. There aren’t 20 AI models of comparable size to GPT-4, so for the same carbon cost we could either halt all progress in advanced AI for a decade or choose not to run Coachella for 1 year so people don’t fly to it. This does not seem worth it.

To put a more specific number on the energy it took to train GPT-4: it’s about 60 GWh. GPT-4 was trained to answer questions, so to assess the energy cost we need to consider how many searches we’ve gotten out of that training energy. I see the training cost as analogous to comparing the cost of a shirt with how often you’ll wear it. If a shirt costs $40 but is well made, so that it will last 60 washes, and another shirt is $20 but is poorly made, so it will only last 10 washes, then even though the first shirt is initially more expensive, it actually costs $0.67 per wear, while the second shirt costs $2 per wear. In a meaningful sense the first shirt is actually cheaper once you make the initial investment. In the same way, training can look expensive in terms of energy if you don’t factor in just how many users and searches GPT-4 will handle.

A very rough estimate using publicly available data says that there have been about 200 billion ChatGPT searches so far. This means that if we include the cost of training in the total energy cost of searching ChatGPT, we add 60 GWh / 200,000,000,000 searches = 0.3 Wh to the 3 Wh per search, for a total of about 3.3 Wh per search. The training cost amortized over each search adds 0.3 Wh of energy, so it increases the total energy cost of a ChatGPT search by 1 Google search’s worth of energy. This does not seem significant. Consider also that ChatGPT is just one thing GPT-4 is being used for; other things include:

  • DuoLingo

  • Khan Academy

  • Be My Eyes

  • GitHub Copilot X

Once you factor in just how much use it’s getting, the energy cost of training GPT-4 looks incredibly cheap, in the same way that the initially more expensive shirt is cheaper overall than the second.
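Here’s the “cost per wear” arithmetic written out as a sketch, using the rough figures above (60 GWh of training energy, about 200 billion searches served, and 3 Wh per search at inference time):

```python
# Amortizing the rough training-energy estimate over ChatGPT searches,
# alongside the shirt example. All inputs are rough estimates from the post.

TRAINING_GWH = 60
TOTAL_SEARCHES = 200e9
INFERENCE_WH_PER_SEARCH = 3.0

training_wh_per_search = TRAINING_GWH * 1e9 / TOTAL_SEARCHES
total_wh_per_search = INFERENCE_WH_PER_SEARCH + training_wh_per_search
print(f"Training energy amortized per search: {training_wh_per_search:.1f} Wh")  # 0.3 Wh
print(f"Total per search incl. training:      {total_wh_per_search:.1f} Wh")     # 3.3 Wh

# Same structure as the shirt example: upfront cost divided by uses.
for price, washes in [(40, 60), (20, 10)]:
    print(f"${price} shirt lasting {washes} washes: ${price / washes:.2f} per wear")
```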

When someone throws a statistic at you with a big number about a very popular product, you should be careful about how well you actually understand the magnitudes involved. We’re not really built for thinking about numbers this large, so the best we can do is compare them to similar situations to give ourselves more context. The internet is ridiculously large, complex, and used by almost everyone, so we should expect it to use a large portion of our total energy. Anything widely used on the internet is going to come with eye-popping numbers about its energy use. If we just look at those numbers in a vacuum it is easy to make anything look like a climate emergency.

ChatGPT uses as much energy as 20,000 households, but Netflix reported using 450 GWh of energy last year, which is equivalent to about 40,000 households. Netflix’s estimate only includes its data center use, which is only about 5% of the total energy cost of streaming, so Netflix’s actual energy footprint is closer to 800,000 households. And that’s just one streaming site. In total, video streaming accounts for about 1.5% of all global electricity use, or 375,000 GWh, or the yearly energy use of 33,000,000 households. ChatGPT uses the same energy as Barnstable, Massachusetts, while video streaming uses the same energy per year as all of New England, New York State, New Jersey, and Pennsylvania combined. Video streaming uses about 1,600x as much energy as ChatGPT, but we don’t hear about it as much because it’s a much more normal part of everyday life. 20,000 households can sound like a crazy number when you compare it to your individual life, but it’s incredibly small by the standards of internet energy use.
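Here’s a sketch of that streaming comparison. The only extra assumption is that one “household” means the roughly 11,000 kWh per year implied by Netflix’s 450 GWh ≈ 40,000 households figure:

```python
# Rough comparison of ChatGPT's energy footprint to video streaming, using
# the estimates quoted above. The 5% data-center share of streaming energy
# is a rough public figure, not a measured value.

CHATGPT_HOUSEHOLDS = 20_000
NETFLIX_DATACENTER_GWH = 450
DATACENTER_SHARE_OF_STREAMING = 0.05
GLOBAL_STREAMING_GWH = 375_000
HOUSEHOLD_GWH_PER_YEAR = 450 / 40_000  # implied by Netflix's own comparison

netflix_total = NETFLIX_DATACENTER_GWH / DATACENTER_SHARE_OF_STREAMING / HOUSEHOLD_GWH_PER_YEAR
streaming_total = GLOBAL_STREAMING_GWH / HOUSEHOLD_GWH_PER_YEAR
print(f"Netflix total: ~{netflix_total:,.0f} households")                        # ~800,000
print(f"All video streaming: ~{streaming_total:,.0f} households")                # ~33,000,000
print(f"Streaming vs ChatGPT: ~{streaming_total / CHATGPT_HOUSEHOLDS:,.0f}x")    # ~1,600-1,700x
```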

Here’s how many American households’ worth of energy different online activities use globally, all back-of-the-envelope calculations I did with available info, plus an American city that uses about the same amount of energy. I factored in both the energy used in data centers and the energy used on each individual device. There are big error bars, but the rough proportions are right.

  • 11,000 households – Barre, VT – Google Maps

  • 20,000 households – Barnstable, MA – ChatGPT

  • 23,000 households – Bozeman, MT – Fortnite

  • 150,000 households – Cleveland, OH – Zoom

  • 200,000 households – Worcester, MA – Spotify

  • 800,000 households – Houston, TX – Netflix

  • 1,000,000 households – Chicago, IL – YouTube

Does this mean that we should stop using Spotify or video streaming? No. Remember the rule that we shouldn’t just default to cutting the biggest emitters without considering both the value of the product and how many people are using it. Each individual Spotify stream uses a minuscule amount of energy. The reason it’s such a big part of our energy budget is that a lot of people use Spotify! What matters when considering what to cut is the energy used compared to the amount of value produced, and the other options available for getting the same service. The energy involved in streaming a Spotify song is much, much less than the energy required to physically produce and distribute music CDs, cassettes, and records. Replacing energy-intensive physical processes with digital alternatives is part of the reason energy consumption per American citizen has gone down by 22% since its peak in 1979.

If people are going to listen to music, we should prefer that they do it via streaming rather than by buying physical objects. Just saying that Spotify uses the same energy as all of New York City, without considering the number of users, the benefits they get from the service, or how energy efficient the other options for listening to music would be, is extremely misleading. Pointing out that ChatGPT uses the same energy as 20,000 households without adding any other details is just as misleading.

Here’s ChatGPT’s explanation of why and how AI data centers use water and where it goes afterward. In a nutshell, AI data centers:

  • Draw from local water supplies.

  • Use water to cool the GPUs doing the calculations (in the same way your laptop fan cools your laptop when it overheats).

  • Evaporate the water afterward, or drain it back into local supplies.

Something to note about LLM water use is that while much of the water is evaporated and leaves the specific water source, data centers produce significantly less water pollution per gallon of water used than many other sectors, especially agriculture. The impact of AI data centers on local water sources is obviously important to think about, and how sustainable they are mostly depends on how fragile the water source is. Good water management policies should factor in which water sources are most threatened and how to protect them.

How to morally weigh different types of water use (data centers evaporating it vs. agriculture polluting it) seems very hard. The ecosystems impacted are too complex to put exact numbers on how bad one is compared to the other. I will say that, intuitively, a data center that removes water from a local source but evaporates it unpolluted back into the broader water system seems bad for very specific local sources, but not really bad for our overall access to water, so the whole concern might be somewhat overblown and wouldn’t matter at all if we just built data centers exclusively around stable water supplies. I’m open to being wrong here and would be happy to get more thoughts in the comments. Simply reporting “Data centers use X amount of water” without clarifying whether the water is evaporated, or returned to the local water supply polluted or unpolluted, seems so unclear that it’s a bad statistic without more context.

20-50 ChatGPT searches use the same amount of water as a standard water bottle (0.5 L).

This means it takes roughly 300 ChatGPT queries to use 1 gallon of water.
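The gallon conversion is easy to check. The only inputs are the 0.5 L bottle figure, the 20-50 searches per bottle range quoted above, and the fact that a US gallon is about 3.785 liters:

```python
# Converting the water figures above: ~0.5 L per 20-50 ChatGPT searches
# implies roughly 10-25 mL per search, so one gallon covers on the order
# of a few hundred searches.

GALLON_L = 3.785
BOTTLE_L = 0.5

for searches_per_bottle in (20, 50):
    ml_per_search = BOTTLE_L * 1000 / searches_per_bottle
    searches_per_gallon = GALLON_L / (ml_per_search / 1000)
    print(f"{searches_per_bottle} searches/bottle -> {ml_per_search:.0f} mL/search, "
          f"{searches_per_gallon:.0f} searches per gallon")
# Output spans roughly 150-380 searches per gallon, i.e. "about 300".
```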

The amount of water used by LLMs can seem like a lot. It is always shocking to realize that our internet activities actually do have significant real-world physical impacts. The issue with how AI water use is discussed is that conversations often don’t compare the water use of AI to other ways water gets used.

All online activity relies on data centers, and data centers use water for cooling, so everything that you do online uses water. The conversation about LLMs often presents the water they use as absurd without giving any context for how much water other online activity uses. It’s actually pretty easy to calculate a rough estimate of how much water different online activities use, because data centers typically use about 1.8 liters of water per kWh of energy consumed. This number includes both the water used by the data center itself and the water used in generating the electricity it consumes. Here’s the water used in a bunch of different things you do on the internet, in milliliters:

  • 10 mL – Sending an email

  • 10 mL – Posting a photo on social media

  • 20 mL – One online bank transaction

  • 30 mL – Asking ChatGPT a question

  • 40 mL – Downloading a phone app

  • 170 mL – E-commerce purchase (browsing and checkout)

  • 250 mL – 1 hour of streaming music

  • 260 mL – 1 hour of GPS navigation

  • 430 mL – 1 hour of browsing social media

  • 860 mL – Uploading a 1 GB file to cloud storage

  • 1720 mL – 1 hour Zoom call

  • 2580 mL – 10 minutes of 4K video

After the recent California wildfires I scrolled past several social media posts, with over 1 million views each, saying something like “People are STILL using ChatGPT as California BURNS.” They should have focused more on the people watching Fantastic Places in 4K 60FPS HDR Dolby Vision (4K Video).

Should the climate movement start demanding that everyone stop listening to Spotify? Would that be a good use of our time?

What about the water cost of training GPT-4? So far I’ve only included the cost of individual queries. A rough estimate based on available info says training GPT-4 took about 250 million gallons of water, or about 1 billion liters. Using the assumption above that ChatGPT has received about 200 billion queries so far, the training water cost adds 0.005 L of water to the 0.030 L cost of a search, so if we include the training cost, the water use per search goes up by about 16%. That’s still not as water intensive as downloading an app on your phone or 10 minutes of streaming music. Remember that ChatGPT is just one use of GPT-4, so the actual training water cost per ChatGPT search is even lower.
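Here’s that water amortization written out, using the rough figures above (about 1 billion liters for training, about 200 billion queries served, and 0.030 L of water per query at inference time):

```python
# Amortizing the rough training-water estimate over ChatGPT queries.
# All inputs are rough estimates from the post.

TRAINING_WATER_L = 1e9
TOTAL_QUERIES = 200e9
INFERENCE_WATER_L_PER_QUERY = 0.030

training_l_per_query = TRAINING_WATER_L / TOTAL_QUERIES
increase = training_l_per_query / INFERENCE_WATER_L_PER_QUERY
print(f"Training water per query: {training_l_per_query:.3f} L")   # 0.005 L
print(f"Increase over inference-only water use: {increase:.1%}")   # ~16-17%
```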

Animal agriculture uses orders of magnitude more water than data centers. If I wanted to reduce my water use by 600 gallons, I could:

  • Skip sending 200,000 ChatGPT queries, or 50 queries every single day for a decade.

  • Skip listening to ~2 hours of streaming music every single day for a decade.

  • Skip 1 burger.

Instead of quitting ChatGPT, consider going vegan…
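Here’s a sketch of the 600-gallon comparison. I’m assuming roughly 11 mL of water per search (around the middle of the 10-25 mL range implied by the bottle figure above); the ~600 gallons per burger is the commonly cited rough estimate for beef:

```python
# Rough check of the "600 gallons" comparison. Per-search water use and the
# streaming-music figure are rough estimates quoted earlier in the post.

GALLON_L = 3.785
ML_PER_SEARCH = 11        # rough per-search water use (assumption, from ~0.5 L per ~45 searches)
SEARCHES_SKIPPED = 200_000
MUSIC_ML_PER_HOUR = 250   # rough streaming-music figure from the list above

search_gallons = SEARCHES_SKIPPED * ML_PER_SEARCH / 1000 / GALLON_L
music_hours = search_gallons * GALLON_L * 1000 / MUSIC_ML_PER_HOUR
print(f"Skipping {SEARCHES_SKIPPED:,} searches saves ~{search_gallons:.0f} gallons")        # ~580
print(f"Equivalent to ~{music_hours / 3650:.1f} hours/day of streaming music for a decade")  # ~2.4
```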

A common criticism of the above graph is that the water cows consume exists in grass, while the water used in data centers is drawn from local sources that are often already water-stressed. This is not right, for three reasons:

If you are trying to reduce your water consumption, eliminating your personal use of ChatGPT is like thinking about where in your life you can most effectively cut your emissions and starting by getting rid of your digital alarm clock.

A back-of-the-envelope calculation tells me that the ratio of the water use of 1 ChatGPT search to 1 burger is about the same as the ratio of the energy used by a 1-mile drive in a sedan to the energy used by driving the world’s largest cruise ship for 60 miles.

If your friend were about to drive the largest cruise ship in history solo for 60 miles, but decided to walk the 1 mile to the dock instead of driving because they were “concerned about the climate impact of driving,” how seriously would you take them? The situation is the same with water use and LLMs. There are problems that so completely dwarf individual LLM water use that it does not make sense for the climate movement to focus on individual LLM use at all.

The people who are trying to tell you that your personal use of ChatGPT is bad for the environment are just fundamentally confused about where water (and energy) is actually being used. This is such a widespread misconception that you should respectfully but firmly let them know that they’re wrong.

Source link

