Do you need a little inspiration boost? Well, then our new batch of desktop wallpapers might be for you. The wallpapers are designed with love by the community for the community and can be downloaded for free! Enjoy!
The Core Model is a practical methodology that flips traditional digital development on its head. Instead of starting with solutions or structure, we begin with a hypothesis about what users need and follow a simple framework that brings diverse teams together to create more effective digital experiences. By asking six good questions in the right order, teams align around user tasks and business objectives, creating clarity that transcends organizational boundaries.
Web Components are more than just Custom Elements. Shadow DOM, HTML Templates, and Custom Elements each play a role. In this article, Russell Beswick demonstrates how Shadow DOM fits into the broader picture, explaining why it matters, when to use it, and how to apply it effectively.
Today, roughly 10% of people are left-handed. Yet most products — digital and physical — aren’t designed with left-handed use in mind. Let’s change that. More design patterns in Smart Interface Design Patterns, a **friendly video course on UX** and design patterns by Vitaly.
Event listeners are essential for interactivity in JavaScript, but they can quietly cause memory leaks if not removed properly. And what if your event listener needs parameters? That’s where things get interesting. Amejimaobari Ollornwi shares which JavaScript features make handling parameters with event handlers both possible and well-supported.
Is there a way to build demos that do not break when the services they rely on fail? How can we ensure educational demos stay available for as long as possible? Keeping Article Demos Alive When Third-Party APIs Die was originally published on CSS-Tricks.
I went on to figure out how to make masonry work today with other browsers. I'm happy to report I've found a way — and, bonus! — that support can be provided with only 66 lines of JavaScript. Making a Masonry Layout That Works Today was originally published on CSS-Tricks.
Do we invent or discover CSS tricks? Lee Meyer discusses how creative limitations, recursive thinking, and unexpected combinations lead to his most interesting ideas. How to Discover a CSS Trick was originally published on CSS-Tricks.
Brad Frost introduced the “Atomic Design” concept wayyyy back in 2013. He even wrote a book on it. And we all took notice, because that term has been part of our lexicon ever since. It’s a nice way … Atomic Design Certification Course was originally published on CSS-Tricks.
Chrome 139 is experimenting with Open UI’s proposed Interest Invoker API, which would be used to create tooltips, hover menus, hover cards, quick actions, and other types of UIs for showing more information with hover interactions. A First Look at the Interest Invoker API (for Hover-Triggered Popovers) was originally published on CSS-Tricks.
In today's episode of Whiteboard Friday, Moz SEO expert Tom Capper walks you through cannibalization: what it is, how to identify it, and how to fix it.
Let's take an in-depth look at Moz.com title tags that were re-written by Google, including three case studies where we managed to fix bad rewrites.
See how, through several little tweaks to their conversion strategy, the team at Chromatix attracted a higher tier of customers, more inquiries, plus over $780,000 worth of new sales opportunities.
Miriam helps you get started in Google My Business Products with this illustrated tutorial, walking you through how to add your most important products and services.
In SEO, there are three main “bosses” with different needs: your business, your searchers, and your search engines. How do you answer to all of them?
The world of ecommerce continues to grow year by year. As consumers want to purchase items from the comfort of their own home and have them delivered in a timely manner, the opportunities in ecommerce are ever-expanding. The expectation for a good user experience is now higher than ever. Customers want to know they can trust the website they are buying from, they want the website to load fast, and they want a great offer.

All of these things come into play when discussing the conversion rate of any ecommerce store. As an ecommerce store owner, your ultimate goal is to convert as many visitors as possible into paying customers. However, this can be a challenging task, especially in today's competitive ecommerce landscape. The good news is that there are proven strategies you can implement to improve your ecommerce conversion rate and boost your sales. In this article, we will explore ten of these strategies.

## Ecommerce conversion rate benchmarks

First off, we'll start with some benchmarks. Conversion rates vary based on industry, location, and device used. The following conversion rate results were gathered by Monetate for Q3 2021 through Q3 2022.

The UK has an average conversion rate that is significantly higher than the US, at 3.7% compared to 2.3%. The global conversion rate for Q3 2022 was 2.5%, down from 2.6% in Q2.

As mentioned above, the device used to access an ecommerce store also has an effect on the conversion rate. Traditional devices like desktops have the highest conversion rate at 3.1%, tablets come in a close second at 2.8%, and smartphones sit at 2.2%.

The lower conversion rate on mobile could be due to a variety of reasons:

- **Screen size:** The smaller screen size of mobile phones can make it more challenging for visitors to browse and view products. This can result in a less engaging user experience, leading to lower conversion rates.
- **Navigation:** The navigation of a mobile website can be different from that of a desktop or tablet website. Visitors may have difficulty finding what they are looking for, leading to frustration and a higher likelihood of abandonment.
- **Checkout process:** The checkout process on a mobile website may not be as optimized as that of a desktop or tablet website, making it more difficult for visitors to complete their purchase.
- **Security concerns:** Visitors may have concerns about the security of making a purchase on a mobile device, especially if they are using public Wi-Fi or are unsure of the security of the mobile website.
- **Technical issues:** Technical issues such as slow loading times or errors may be more prevalent on mobile websites, leading to a less positive user experience and lower conversion rates.

To address these challenges and improve mobile conversion rates, it's essential to optimize your website for mobile devices. This can involve using responsive design to ensure that your website is optimized for different screen sizes, simplifying navigation, and streamlining the checkout process. Additionally, it's essential to prioritize website security and ensure that technical issues are addressed promptly to provide the best possible user experience on mobile devices.

## 10 ways to optimize your ecommerce conversion rate

Below we've outlined 10 methods you should implement to increase your ecommerce conversion rate. Go through each suggestion and work on implementing it into your store if you haven't already.
These methods have been proven time and time again to increase conversion rates and give your customers a better experience overall so that they will keep coming back.

### 1. Optimize your website design

The first impression your website gives to visitors is critical in determining whether they will stay on your site or leave immediately. A poorly designed website can significantly impact your conversion rate. Your website design should be visually appealing, easy to navigate, and user-friendly.

One of the critical elements of your website design is the layout. Ensure that your website is organized and has a clear hierarchy of information. Your navigation menu should be easy to use and intuitive. The layout should be responsive and work well on all devices, including desktops, tablets, and mobile phones.

Another important design element is the color scheme. Colors can affect emotions and influence buying decisions. Choose colors that are in line with your brand and that appeal to your target audience. For instance, if you're selling products for kids, you might want to use bright and cheerful colors.

### 2. Optimize your product pages

Your product pages are where visitors decide whether to purchase your products or not. Therefore, it's essential to optimize your product pages for maximum conversion. Here are some tips:

- **Write clear and concise product descriptions:** Your product descriptions should be easy to read, informative, and highlight the benefits of your products.
- **Include customer reviews and ratings:** Customer reviews and ratings provide social proof and can increase trust in your brand.
- **Display pricing clearly:** Visitors should be able to see the price of your products clearly without having to search for it.

### 3. Better product images

Product images play a major role in both building trust and branding. Nowadays, all ecommerce operators need to ensure that their images are high quality and optimized. As Peep Laja, founder of ConversionXL, puts it:

> Using quality images in your blog posts makes you sell more of your stuff.

When taking product photos, use a high-quality camera so that your images can be zoomed in on while a potential buyer is browsing your website.

Having high-quality images on your site can, however, slow down its overall performance as the image size grows. Therefore, taking advantage of smart image compression and converting your images to a newer format such as WebP are great ways to optimize image size while ensuring the quality of your images doesn't degrade.
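As a rough illustration of that optimization step, here is a minimal Node.js sketch that batch-converts product photos to WebP using the sharp library. The folder names and the quality setting are assumptions to adapt to your own asset pipeline.

```typescript
import { mkdir, readdir } from "node:fs/promises";
import path from "node:path";
import sharp from "sharp"; // npm install sharp

// Convert every JPEG/PNG product photo in ./products to WebP at ~80% quality.
async function convertToWebP(inputDir = "products", outputDir = "products-webp") {
  await mkdir(outputDir, { recursive: true });
  const files = await readdir(inputDir);
  for (const file of files) {
    if (!/\.(jpe?g|png)$/i.test(file)) continue;
    const target = path.join(outputDir, file.replace(/\.\w+$/, ".webp"));
    await sharp(path.join(inputDir, file))
      .webp({ quality: 80 }) // trade a little quality for a much smaller file
      .toFile(target);
    console.log(`Converted ${file} -> ${target}`);
  }
}

convertToWebP().catch(console.error);
```

Serving the resulting .webp files alongside a JPEG/PNG fallback (or letting your CDN negotiate the format) keeps older browsers working while newer ones get the smaller files.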
### 4. Create an attractive offer

The next conversion optimization tip is to create an attractive offer. Consumers like to feel as though they're getting a great deal on something, and as an ecommerce store owner it's your responsibility to create an offer that your buyers will love. Here are a few common offers you can use:

- On sale (e.g. 15% off)
- Free shipping
- Buy 1 get 1
- Free, just pay shipping
- Free bonus included with every purchase

Scarcity tactics can also be employed if you're running a promotion for a limited time or you only have a certain amount of stock available for a specific item. For example, you can say things like:

- Only X left in stock
- Sale ends in X days
- Due to low stock, there is a limit of X per customer

Big ecommerce brands like Etsy use similar tactics to let their customers know when other people are buying a particular item or when the stock is running out on a particular product. Here are a couple of examples from Etsy's website:

- Limited quantities available
- Product added to customer carts

### 5. Simplify the checkout process

The checkout process is one of the critical stages in the ecommerce conversion funnel. A complicated and time-consuming checkout process can lead to cart abandonment and reduce your conversion rate. Here are some tips to simplify the checkout process:

- **Use a progress indicator:** A progress indicator shows visitors how far they are in the checkout process and gives them an idea of how much more time it will take to complete the purchase.
- **Offer guest checkout:** Not all visitors will want to create an account to make a purchase. Offer a guest checkout option to make the process faster and more convenient.
- **Allow multiple payment options:** Offer multiple payment options, including credit cards, PayPal, and other online payment systems, to cater to different preferences.
- **Reduce the number of form fields:** The more form fields a visitor has to fill in, the more likely they are to abandon the checkout process. Reduce the number of form fields to the bare minimum required to complete the purchase.

### 6. Implement live chat support

Live chat support can be a powerful tool in boosting your ecommerce conversion rate. It allows visitors to ask questions and get immediate answers, which can help to remove any doubts they may have about making a purchase. Live chat support can also help to improve customer satisfaction and loyalty.

When implementing live chat support, ensure that it's easily accessible and visible on your website. Train your support agents to be friendly, helpful, and knowledgeable about your products and services.

### 7. Improve site speed

Having fast site speed is imperative. Online users are becoming less and less patient, meaning you as an ecommerce store owner need to implement methods for reducing latency and speeding up your website. There are many ways to speed up a slow website. We've written a comprehensive guide which gives 18 tips for website performance optimization.

One thing any ecommerce store owner can do to instantly improve their global page speed is to implement a CDN. With a CDN, you can offload your static assets such as product images, videos, GIFs, CSS files, and much more to the CDN's edge servers. Then, when someone requests one of your store's webpages, the content is delivered from the CDN's nearest edge server instead of your origin server. This reduces latency and speeds up your website. (A small sketch of rewriting asset URLs to a CDN is shown after the list below.)

We've also written multiple articles on ways to speed up your website if you're using certain CMS platforms. Check them out below:

- Speed up Drupal
- Speed up Joomla
- Speed up Magento
- Speed up PrestaShop
- Speed up WordPress
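To make the "offload static assets to a CDN" step concrete, here is a small, assumption-laden sketch of how a storefront's rendering layer might rewrite asset paths to a CDN hostname. The zone hostname `cdn.example-store.com` and the `cdnUrl` helper are purely illustrative placeholders, not a specific CDN provider's API.

```typescript
// Hypothetical CDN zone hostname - replace with the zone URL your CDN provider gives you.
const CDN_HOST = "https://cdn.example-store.com";

// Rewrite an origin-relative asset path to its CDN equivalent.
function cdnUrl(assetPath: string): string {
  return new URL(assetPath, CDN_HOST).toString();
}

// Usage in a template or rendering layer:
const productImage = cdnUrl("/images/products/blue-widget.webp");
// -> "https://cdn.example-store.com/images/products/blue-widget.webp"
console.log(productImage);
```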
### 8. Optimize for mobile

Initially, desktop computers were the primary means of accessing the Internet, with mobile devices being a secondary option. However, with the rise of smartphones and tablets, mobile Internet usage has surpassed desktop usage, and it continues to grow.

According to a report by Statcounter, mobile devices accounted for approximately 59% of global internet traffic in 2023, with desktops accounting for approximately 39%. This represents a significant shift in the way people access the Internet and interact with online content.

(Source: Comscore)

The increased use of mobile devices for Internet browsing and online shopping has led to a shift in website design and optimization. Websites must now be designed to be responsive and optimized for mobile devices to provide an optimal user experience on smaller screens. This includes simplified navigation, mobile-friendly content, and streamlined checkout processes.

As many visitors to your ecommerce store will be using a mobile device, it's essential that your site is responsive and optimized for mobile just as well as it is for desktop. Google's PageSpeed Insights tool is a great way to check your speed score for both mobile and desktop devices. It will also give you suggestions on areas that can be improved for both mobile and desktop visitors. To learn more about what other site speed test tools are available, check out our complete guide on the top 15 free speed test tools.

### 9. Use videos/GIFs

Images are great for showing a product; however, depending on the type of product you're selling, it may also be beneficial to use videos and/or GIFs to display your product. If you're selling a product that solves a problem, then you are most likely going to benefit from having a video on your product page which shows how the product works. This allows the customer to visualize how the product does what you say it does. Otherwise, the consumer needs to visualize the functionality of the product themselves, which isn't as persuasive.

You can also use videos for products which don't solve a problem per se (e.g. a clothing brand). In this case, you could have a video which displays the lifestyle of the product and the feeling you want to portray to your potential customer. Videos will hit home much harder with your website visitors, and although they take more time and effort to produce, they do pay off.

However, similar to images, it's important that your videos/GIFs remain optimized. Videos are one of the biggest contributors to web page size growth, and although they are very valuable, they can be detrimental to page speed if used improperly. That's why we've written articles on how to optimize the delivery of both your videos and GIFs so that you can implement both of them into your product pages without suffering the consequences of poor page speed.

### 10. Implement abandoned cart emails and retargeting

One of the best ways to increase conversion rates with people who are familiar with your site but haven't made a purchase yet is through abandoned cart emails and retargeting.

Abandoned cart emails are fairly straightforward. Basically, you can use a service like MailChimp or Klaviyo to automatically send emails to users who abandoned their cart. These emails can be spaced out throughout the day or across several days to remind your potential customers that they left something in their cart. A great way to bring customers back is to offer them a coupon code in the email so that they can get an even better deal if they complete their purchase. A typical abandoned cart email sequence is usually 3-4 emails long. Any more could cause the visitor to get annoyed, and any less could be a missed opportunity.

Additionally, you can retarget people who have visited your site if you're using paid advertising like Google AdWords or Facebook Ads. These advertising platforms allow you to define who you want to send retargeting ads to based on their browsing events and then show them ads that will entice the visitor to come back. Similar to abandoned cart emails, retargeting ads typically include a discount code of some sort to further entice the visitor to come back.

## Summary

Ecommerce is an ever-changing landscape, and to keep up with the fast pace, store owners must be quick to implement best practices and be innovative so that their store doesn't fall behind the competition.
Improving your ecommerce conversion rate requires a strategic approach that focuses on optimizing various elements of your website and ecommerce funnel.

By implementing the ten strategies discussed in this article, you can increase your conversion rate, boost your sales, and improve customer satisfaction and loyalty. Remember to continuously test and optimize your website and ecommerce funnel to ensure that you're providing the best possible experience to your visitors and customers.
As a website owner, you want your website to be fast, efficient, and accessible to as many users as possible. One of the best ways to achieve this is by using HTTP caching headers. These headers tell web browsers and other HTTP clients how to cache and serve content from your website.

This article highlights important information on HTTP caching headers and associated CDN behavior. In case you are looking for in-depth information on the role of HTTP cache headers in the modern web, here's everything you need to know.

## How caching works

When a browser requests a file from a server, the server responds with the file and some cache headers. The browser then caches the file based on these headers. The next time the browser requests the same file, it checks its cache to see if it already has a copy. If it does, and the file hasn't expired, the browser serves the cached version of the file. If the file has expired or if the browser has been told not to cache it, the browser requests a fresh copy of the file from the server.

Caching works differently depending on the type of cache being used. There are two main types of caches: browser caches and CDN caches.

### Browser caches

Browser caches are local caches that are used by web browsers to store copies of files. When a browser requests a file, it first checks its local cache to see if it already has a copy. If it does, and the file hasn't expired, the browser serves the cached version of the file. If the file has expired, or if the browser has been told not to cache it, the browser requests a fresh copy of the file from the server.

### CDN caches

CDN caches are distributed caches that are used by Content Delivery Networks (CDNs) to store copies of files. When a browser requests a file from a website that is using a CDN, the request is sent to the CDN instead of the origin server. If the CDN has a cached copy of the file, it serves it directly to the browser. This can greatly reduce the amount of time and resources needed to load the file, as the request doesn't need to travel all the way to the origin server.

CDN caches can be configured in a number of different ways, depending on the needs of the website. Some CDNs use a "pull" model, where the CDN only caches files when they are requested by a browser. Other CDNs use a "push" model, where the origin server sends files to the CDN proactively before they are requested by a browser.

## What are HTTP cache headers?

HTTP cache headers are instructions that web servers send to web browsers, telling them how to cache and serve content. These headers are sent with every HTTP request and response. They can be used to control how frequently a browser caches a file, how long the cache should keep the file, and what should be done when the file has expired.

HTTP cache headers are important because they help reduce the amount of time and resources needed to load a web page. By caching content, a browser can serve it more quickly without having to request it from the server every time a user visits the page. This can improve website performance, reduce server load, and improve the overall user experience.

## Types of HTTP cache headers

Caches work with content mainly through freshness and validation. A fresh representation is available instantly from a cache, while a validated representation avoids re-downloading the entire representation if it hasn't changed. If no validator is present (e.g. an ETag or Last-Modified header) and there is no explicit freshness information, a response will usually (but not always) be considered uncacheable.
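As a quick, hedged sketch (the URL is a placeholder), you can inspect the caching-related headers a server sends for any asset with a few lines of Node.js or browser code:

```typescript
// Inspect the caching-related response headers for a given asset.
async function inspectCacheHeaders(url: string): Promise<void> {
  const res = await fetch(url);
  for (const name of ["cache-control", "expires", "etag", "last-modified", "age"]) {
    console.log(`${name}: ${res.headers.get(name) ?? "(not set)"}`);
  }
}

inspectCacheHeaders("https://example.com/style.css");
```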
Let's shift our focus to the kinds of headers you should be concerned about.

### 1. Cache-Control

Every resource can define its own caching policy via the Cache-Control HTTP header. Cache-Control directives control who caches the response, under what conditions, and for how long.

Requests that don't need server communication are considered the best requests: local copies of the responses eliminate network latency as well as the data charges resulting from data transfers. The HTTP specification enables the server to send several different Cache-Control directives which control how and for how long individual responses are cached by browsers and by intermediate caches such as a CDN.

```
Cache-Control: private, max-age=0, no-cache
```

These settings are referred to as response directives. They are as follows:

**public vs private**

A response that is marked public can be cached even in cases where it is associated with HTTP authentication or the HTTP response status code is not normally cacheable. In most cases, marking a response public isn't necessary, since explicit caching information (e.g. max-age) indicates that a response is cacheable anyway.

On the contrary, a response marked private can be cached (by the browser), but such responses are typically intended for a single user, so they aren't cacheable by intermediate caches (e.g. HTML pages with private user info can be cached by a user's browser but not by a CDN).

**no-cache and no-store**

no-cache indicates that returned responses can't be used for subsequent requests to the same URL before checking whether the server's response has changed. If a proper ETag (validation token) is present, no-cache incurs a roundtrip to validate cached responses. Caches can, however, eliminate downloads if the resources haven't changed. In other words, web browsers might cache the assets, but they have to check on every request whether the assets have changed (a 304 response if nothing has changed).

no-store, on the other hand, is simpler. It disallows browsers and all intermediate caches from storing any version of returned responses, such as responses containing private/personal information or banking data. Every time users request this asset, requests are sent to the server and the asset is downloaded again.

**max-age**

The max-age directive states the maximum amount of time in seconds that a fetched response is allowed to be reused (from the time the request is made). For instance, max-age=90 indicates that an asset can be reused (remains in the browser cache) for the next 90 seconds.

**s-maxage**

The "s-" stands for shared, as in shared cache. This directive is explicitly for CDNs and other intermediary caches. When present, it overrides the max-age directive and the Expires header field. KeyCDN also obeys this directive.

**must-revalidate**

The must-revalidate directive tells a cache that it must revalidate an asset with the origin after it becomes stale. The asset must not be delivered to the client without doing an end-to-end revalidation. In short, stale assets must first be verified, and expired assets should not be used.

**proxy-revalidate**

The proxy-revalidate directive is the same as the must-revalidate directive; however, it only applies to shared caches such as proxies. It is useful in the event that a proxy serves many users that need to be authenticated one by one.
A response to an authenticated request can be stored in the user's cache without needing to be revalidated each time, since the user is known and has already been authenticated. However, proxy-revalidate allows proxies to still revalidate for new users that have not been authenticated yet.

**no-transform**

The no-transform directive tells any intermediary, such as a proxy or cache server, not to make any modifications whatsoever to the original asset. The Content-Encoding, Content-Range, and Content-Type headers must remain unchanged. Modifications can occur when a non-transparent proxy decides to alter assets in order to save space; however, this can cause serious problems if the asset must remain identical to the original entity-body while also passing through the proxy.

According to Google, the Cache-Control header is all that's needed in terms of specifying caching policies. Other methods are available, which we'll go over in this article, but they are not required for optimal performance.

The Cache-Control header is defined as part of the HTTP/1.1 specification and supersedes previous headers (e.g. Expires) used to specify response caching policies. Cache-Control is supported by all modern browsers, so that's all we need.

### 2. Pragma

The old Pragma header accomplishes many things, most of them now covered by newer implementations. We are, however, most concerned with the Pragma: no-cache directive, which is interpreted by newer implementations as Cache-Control: no-cache. You don't need to be concerned about this directive because it's a request header that will be ignored by KeyCDN's edge servers. It is, however, useful to be aware of it for overall understanding. Going forward, no new HTTP directives will be defined for Pragma.

### 3. Expires

A couple of years back, this was the main way of specifying when assets expire. Expires is simply a basic date-time stamp. It's fairly useful for old user agents, which still roam uncharted territories. It is, however, important to note that the Cache-Control max-age and s-maxage directives still take precedence on most modern systems. It's good practice to set matching values here for the sake of compatibility. It's also important to ensure you format the date properly, or it might be considered expired.

```
Expires: Sun, 03 May 2015 23:02:37 GMT
```

To avoid breaking the specification, avoid setting the date value to more than a year in the future.

### 4. Validators

**ETag**

This type of validation token (the standard in HTTP/1.1):

- Is communicated via the ETag HTTP header (by the server).
- Enables efficient resource updates where no data is transferred if the resource doesn't change.

The following example illustrates this. 90 seconds after the initial fetch of an asset, the browser initiates a new request for the exact same asset. The browser looks up its local cache and finds the previously cached response, but cannot use it because it has expired. At this point the browser requests the full content from the server. The problem with this is that if the resource hasn't changed, there is no reason to download the same asset that is already in the CDN cache.

Validation tokens solve this problem. The edge server creates and returns arbitrary tokens, stored in the ETag header field, which are typically a hash or other fingerprint of the contents of existing files. Clients don't need to know how these tokens are generated, but they do need to send them to the server on subsequent requests.
If the tokens are the same, the resource hasn't changed and the download can be skipped.

The web browser provides the ETag token automatically within the If-None-Match HTTP request header. The server then checks the token against the current asset in the cache. A 304 Not Modified response tells the browser that the asset in the cache hasn't changed, allowing a renewal for another 90 seconds. It's important to note that these assets don't need to be downloaded again, which saves bandwidth and time.

**How do web developers benefit from efficient revalidation?**

Browsers do most (if not all) of the work for web developers. For instance, they automatically detect whether validation tokens have been previously specified, append them to outgoing requests, and update cache timestamps as required based on responses from servers. Web developers are therefore left with only one job: ensuring servers provide the required ETag tokens. KeyCDN's edge servers fully support the ETag header.

**Last-Modified**

The Last-Modified header indicates the time a document last changed and is the most common validator. It can be seen as a legacy validator from the time of HTTP/1.0. When a cache stores an asset that includes a Last-Modified header, it can use it to ask the server whether that representation has changed since it was last seen. This is done using the If-Modified-Since request header field.

An HTTP/1.1 origin server should send both the ETag and the Last-Modified value. More details can be found in section 13.3.4 of RFC 2616.

KeyCDN example response header:

```
HTTP/1.1 200 OK
Server: keycdn-engine
Date: Mon, 27 Apr 2015 18:54:37 GMT
Content-Type: text/css
Content-Length: 44660
Connection: keep-alive
Vary: Accept-Encoding
Last-Modified: Mon, 08 Dec 2014 19:23:51 GMT
ETag: "5485fac7-ae74"
Cache-Control: max-age=533280
Expires: Sun, 03 May 2015 23:02:37 GMT
X-Cache: HIT
X-Edge-Location: defr
Access-Control-Allow-Origin: *
Accept-Ranges: bytes
```

You can check your HTTP cache headers using KeyCDN's HTTP Header Checker tool.
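To tie the Cache-Control and validator sections together, here is a hedged sketch of an origin handler (plain Node.js, not KeyCDN's implementation) that sets Cache-Control and an ETag, and answers a matching If-None-Match with 304 Not Modified. The hashing choice, the sample body, and the max-age value are assumptions made for illustration.

```typescript
import { createServer } from "node:http";
import { createHash } from "node:crypto";

const body = "body { color: #3a3a3a; }"; // stand-in for a real CSS asset

createServer((req, res) => {
  // Fingerprint the current content; any stable hash works as an ETag.
  const etag = `"${createHash("sha1").update(body).digest("hex").slice(0, 16)}"`;

  res.setHeader("Cache-Control", "public, max-age=86400"); // fresh for one day
  res.setHeader("ETag", etag);

  // If the client's cached copy is still current, skip the body entirely.
  if (req.headers["if-none-match"] === etag) {
    res.writeHead(304);
    res.end();
    return;
  }

  res.writeHead(200, { "Content-Type": "text/css" });
  res.end(body);
}).listen(8080);
```

A browser (or CDN edge) holding the cached asset will revalidate with If-None-Match once max-age expires and receive a cheap 304 instead of the full response body.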
### 5. Extension Cache-Control directives

Apart from the well-known Cache-Control directives outlined in the first section of this article, there are other directives which can be used as extensions to Cache-Control, resulting in a better user experience for your visitors.

**immutable**

No conditional revalidation will be triggered even if the user explicitly refreshes the page. The immutable directive tells the client that the response body will not change over time; therefore, there is no need to check for updates as long as it is unexpired.

**stale-while-revalidate**

The stale-while-revalidate directive allows a stale asset to be served while it is revalidated in the background. The stale-while-revalidate value tells the cache how much time it has to validate the asset in the background while continuing to deliver the stale one. An example of this looks like the following:

```
Cache-Control: max-age=2592000, stale-while-revalidate=86400
```

Learn more about the stale-while-revalidate directive in our stale-while-revalidate and stale-if-error guide.

**stale-if-error**

The stale-if-error directive is very similar to the stale-while-revalidate directive in that it serves stale content when max-age expires. However, stale-if-error only returns stale content if the origin server returns an error code (e.g. 500, 502, 503, or 504) when the cache attempts to revalidate the asset. Therefore, instead of showing visitors an error page, stale content is delivered to them for a predefined period of time. The goal is that during this time the error is resolved and the asset can be revalidated.

Learn more about the stale-if-error directive in our stale-while-revalidate and stale-if-error guide.

## KeyCDN and HTTP cache headers

At KeyCDN, we understand the importance of HTTP cache headers and their role in optimizing website performance. KeyCDN allows you to define your HTTP cache headers as you see fit. The ability to set the Expire and Max Expire values directly within the dashboard makes it very easy to configure things on the CDN side.

Furthermore, if you'd rather have even more control over your HTTP cache headers, you can disable the Ignore Cache Control feature in your Zone settings and have KeyCDN honor all of your cache headers from the origin. This is very useful in the event that you need to exclude a certain asset or group of assets from the CDN.

## TL;DR

The Cache-Control header (in particular), along with the ETag header field, are the modern mechanisms for controlling the freshness and validity of your assets. The other values are only used for backward compatibility.

## Conclusion

HTTP cache headers are an important tool for improving website performance and reducing server load. By properly configuring cache headers, you can ensure that your files are cached and served efficiently, without sacrificing freshness or reliability. Remember to set appropriate cache control and expiration headers, consider using ETag headers, and test your headers to ensure that they are working correctly. By following these best practices, you can create a fast, reliable, and efficient website that delivers a great user experience.

Do you have any thoughts on using HTTP cache headers? If so, we would love to hear them below in the comments.
As a content delivery network (CDN) provider, we understand the importance of website security. One of the most popular content management systems (CMS) out there is WordPress, and unfortunately, it is also one of the most targeted platforms for cyber attacks. In this blog post, we will discuss the different security threats that WordPress websites face and how to fix them.

WordPress is the most popular content management system (CMS) on the Internet today. There are around 810 million sites running on WordPress, and around half of those are hosted on the free WordPress.com site. The rest are hosted on private servers.

There is a reason so many CMS-based sites use WP. WordPress is a smart and intuitive platform that nearly anyone can learn to use. There are numerous plugins and themes available to help website owners customize the look and features of a site. Plus, those who understand coding can easily customize their sites even further.

However, WP is also susceptible to a few security threats. Hackers love to go in through the backdoor of your WP site and attempt to set up residence there. Fortunately, if you are aware of the most common security threats, then you can easily fix them and prevent hackers from taking over your site.

Below are the top 7 WordPress security threats and how to fix them.

## 1. Password hacking

You've probably noticed that most sites requiring a password now require you to create a strong password with capitals, lower case, numbers, and special characters. The more complicated you can make the password (while still remembering what it is), the less chance hackers have of breaking into your site.

Understand that hackers often use bots and can try dozens of passwords in seconds. If your password is easy to crack, you can be certain they can and will crack it. Creating a strong password includes tips such as:

- Not using the same password for everything
- Making the password at least 12 characters long
- Making sure all devices you use to sign in are secure (two-factor authentication helps)

## 2. SQL injections

Because WordPress runs on a database, it also uses PHP server-side scripts. While this works well to deliver content quickly and create a WYSIWYG environment, it also leaves your WP site open to URL insertions.

SQL injection attacks occur when an attacker inserts malicious SQL code into a website's database. The malicious code can be used to access sensitive information or even take control of the website. SQL injection attacks can occur when websites use outdated software, poorly written code, or if user input is not validated properly.

A few methods to help prevent SQL injections include (see also the parameterized-query sketch after this list):

- Update to the latest version of WordPress. Any versions below the most current may be vulnerable to SQL injections.
- Use a site such as WordPress Security Scan to find vulnerabilities in your site and then fix them. The basic scan is free and will identify common errors, but you can also upgrade to a premium scan to check for lesser-known vulnerabilities.
- Update to the latest version of PHP that your web hosting server allows. The more up to date the PHP, the less vulnerable your WordPress site will be to hacking.
- Update plugins. Many vulnerabilities are found in plugins and themes, so make sure you update to the latest version. Also, pay attention to the last time the creator updated the plugin or theme. If they no longer offer updates, switch to a different plugin that does.
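At the code level, the core defense is to keep user input out of the SQL text via parameterized queries. In WordPress plugin or theme code (PHP) that is what $wpdb->prepare() is for; the sketch below shows the same idea in a language-neutral way using the Node.js mysql2 client. The connection details, function name, and author-lookup scenario are hypothetical; only the default wp_ table and column names come from WordPress.

```typescript
import mysql from "mysql2/promise"; // npm install mysql2

async function findPostsByAuthor(authorId: string) {
  const conn = await mysql.createConnection({
    host: "localhost",
    user: "wp_user",
    password: "********",
    database: "wordpress",
  });

  // The "?" placeholder is filled in by the driver, so input like
  // "1; DROP TABLE wp_posts;--" is treated as data, never as SQL.
  const [rows] = await conn.execute(
    "SELECT ID, post_title FROM wp_posts WHERE post_author = ? AND post_status = 'publish'",
    [authorId]
  );

  await conn.end();
  return rows;
}
```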
## 3. Database attacks

Because MySQL is the most common database used, it is also a target for hackers. When you use your server's one-click or easy install features, the default database prefix is wp_. Using this prefix means that the hacker knows the prefix of your database.

If you are just setting up your WP site, it is simply a matter of changing the database prefix. However, if you already have an established WP site, you'll need to go in and make some changes to use a different prefix. You can change your database prefix fairly easily by following these steps.

1. Back up your database in case there is an issue when making changes. This allows you to easily restore the site if there is an error.
2. Go to the root directory of your WordPress installation (you can use FTP, or some servers allow access to files via the control panel) and open the wp-config.php file. Look for a line that reads: `$table_prefix = 'wp_';`
3. Replace wp_ with wp_78398 (use numbers of your choice and make them random; you can also use letters). Save and close the file.
4. Open your database through phpMyAdmin or a similar program. If your server uses cPanel, look for the phpMyAdmin button.
5. Click on the tab that says SQL and use the following query. You can also simply change each prefix manually, but if you have a lot of tables this is time-consuming. Note that you need to change 78398 to the numbers, letters, or combination thereof that you personally used.

```sql
RENAME table `wp_commentmeta` TO `wp_78398_commentmeta`;
RENAME table `wp_comments` TO `wp_78398_comments`;
RENAME table `wp_links` TO `wp_78398_links`;
RENAME table `wp_options` TO `wp_78398_options`;
RENAME table `wp_postmeta` TO `wp_78398_postmeta`;
RENAME table `wp_posts` TO `wp_78398_posts`;
RENAME table `wp_terms` TO `wp_78398_terms`;
RENAME table `wp_termmeta` TO `wp_78398_termmeta`;
RENAME table `wp_term_relationships` TO `wp_78398_term_relationships`;
RENAME table `wp_term_taxonomy` TO `wp_78398_term_taxonomy`;
RENAME table `wp_usermeta` TO `wp_78398_usermeta`;
RENAME table `wp_users` TO `wp_78398_users`;
```

6. You now need to fix any options. Use this query and fix any lines that pop up by changing them to the new prefix you've chosen:

```sql
SELECT * FROM `wp_78398_options` WHERE `option_name` LIKE '%wp_%'
```

7. Finally, search usermeta for wp_ prefixes. Use this query:

```sql
SELECT * FROM `wp_78398_usermeta` WHERE `meta_key` LIKE '%wp_%'
```

Remember that you need to plug in the numbers or letters you chose in place of 78398.

Save the changes and check to make sure everything is working. You should create a second backup of the site with the new prefixes in place, but don't discard the original in case something breaks. It's always a good idea to keep a backup anytime you make any type of major change to your site.

## 4. Brute force attacks

A brute force attack is when an attacker uses automated tools to try to guess the correct username and password combination to gain access to a website. Hackers use dictionaries of commonly used passwords or try every possible combination of characters until they get the right one. Brute force attacks can cause a website to crash, allow attackers to steal sensitive information, or even take control of the website.

Fortunately, this is a pretty easy security threat to stop.

- Install the plugin Limit Login Attempts Reloaded. This plugin not only stops a brute force attack, which can also slow down your website and eat up bandwidth, but it will completely lock an IP out of your site for attempting too many passwords in a short amount of time.
- Install a security plugin. Many of today's security plugins come with a firewall that blocks anyone attempting suspicious activity on your site. One good one is All in One WordPress Security, and another is Wordfence. However, there are a number of options, so choose the one that works best for you and is affordable.

There are some more advanced tactics you can use, such as .htaccess password protection, but start with the plugins, and if that doesn't stop the attacks you can get more in-depth with your protection levels. You can also change the default admin username to better protect your site; Hostinger has a tutorial on changing your username.

## 5. Hijacking an open user session

If multiple people work on your site, there is a security risk for each one. If a person logs in and then walks away from their computer, it is vulnerable to anyone in the vicinity. This could be a problem in a shared workspace, for example. If that person's computer gets hijacked, your site could be vulnerable as well.

- Install the Inactive Logout plugin.
- Choose the settings that make sense for your site. You can set the length of time a person can be inactive before you log them out, and even the message they receive when being logged out.

## 6. Cross-site scripting (XSS)

Cross-site scripting (XSS) attacks occur when an attacker injects malicious code into a website, which is then executed in a user's browser. The malicious code can be used to steal sensitive information or take control of the website. XSS attacks can occur when websites use outdated software, poorly written code, or if user input is not validated properly.

To fix this issue, you should always use the latest version of WordPress and all plugins, and ensure that all code used on the website is properly written and validated. Additionally, you can use plugins like Anti-Malware Security and Brute-Force Firewall to scan your website for any vulnerabilities.

## 7. DDoS attacks

A Distributed Denial of Service (DDoS) attack is when an attacker floods a website with traffic, causing it to crash or become unavailable. DDoS attacks can be carried out using a network of infected computers, also known as a botnet. These botnets can be used to overwhelm a website with traffic, making it inaccessible to users.

You can protect yourself from this type of attack by using CDN services like ours to mitigate DDoS attacks. We have a global network of servers that can absorb and distribute traffic, ensuring that your website remains online even during an attack. Additionally, website owners can use plugins like Wordfence Security to block malicious traffic and reduce the risk of DDoS attacks.

## Keeping your site secure

Now that we have discussed the most common security threats to WordPress websites, let's highlight the most important measures for fixing them.

### Keep your WordPress website up to date

Currently, almost 61% of WordPress users use the latest version. The statistics also show, for example, that over 3% (this corresponds to about 26,730,000 users!) use a version that has been outdated for about five years.

As we mentioned earlier, one of the most common reasons for WordPress websites to be hacked is the use of outdated software. To prevent this, it is essential to keep your WordPress website and all plugins up to date. WordPress updates often contain security patches, and plugin updates often fix security vulnerabilities.
By keeping everything up to date, you reduce the risk of your website being hacked.

### Use strong passwords

Using strong passwords is essential to protect your WordPress website from brute force attacks. Strong passwords should be at least 12 characters long and should include a combination of letters, numbers, and symbols. Avoid using easy-to-guess passwords like "password" or "123456". You can use password managers like LastPass or Dashlane to generate and store strong passwords.

### Install security plugins

There are many security plugins available for WordPress that can help you protect your website from various types of attacks. Some of the most popular security plugins include Wordfence Security, iThemes Security, Sucuri Security, and Anti-Malware Security and Brute-Force Firewall. These plugins can scan your website for malware, block malicious traffic, and enforce strong passwords.

### Use a content delivery network (CDN)

Using a CDN can help protect your WordPress website from DDoS attacks. The global network of servers can absorb and distribute traffic, ensuring that your website remains online even during an attack. Additionally, using a CDN can improve your website's performance, as it caches content and serves it from a server closer to the user.

### Back up your website regularly

Backing up your website regularly is essential in case of a security breach or other catastrophic event. If your website is hacked, you can restore it from a backup to minimize downtime. Most hosting providers offer backup services, but you can also use plugins like UpdraftPlus or BackupBuddy to back up your website to a cloud storage service like Google Drive or Dropbox.

## Conclusion

WordPress websites are vulnerable to various types of security threats, including brute force attacks, SQL injection attacks, hijacking, XSS attacks, database attacks, and DDoS attacks. To protect your WordPress website, you should keep everything up to date, use strong passwords, install security plugins, use a CDN, and back up your website regularly.

By following these best practices, you can reduce the risk of your website being hacked and ensure that your users' data remains secure. As a CDN provider, we are committed to helping you protect your website and provide a fast, secure, and reliable user experience.
If you are building a website or application and wondering whether to use icon fonts or SVGs, you have come to the right place. In this article, we will explore the pros and cons of each option and help you decide which one is the best fit for your project.

Graphical icons are a crucial component of almost every website or app. Although icons are typically small in size by nature, selecting a format for your web icons is not a trivial decision. Aside from the standard image formats, web developers have two main options: SVGs or icon fonts. Which one should you use? Let's see how the two formats compare in terms of performance, flexibility, and accessibility.

## The evolution of web icons

Web icons have come a long way since the early days of the internet. In the time before CSS, web icons had to be images. Because image files are large, web developers have always tried to find alternative methods to display small icons.

In the early 2000s, icons were often simple, pixelated graphics that were used primarily for navigation and to indicate links. As web design evolved, so did the use and design of icons.

One of the first major changes in the evolution of web icons was the introduction of icon fonts. Icon fonts were first introduced in 2010 and quickly gained popularity as a way to easily incorporate icons into web design. They offered a lightweight and scalable alternative to using images for icons.

As web design continued to evolve, the use of SVGs (Scalable Vector Graphics) became more prevalent. SVGs allowed for more design flexibility and could be easily scaled without losing quality. This made them a popular choice for creating custom icons and graphics.

The introduction of flat design in the mid-2010s also had a significant impact on the evolution of web icons. Flat design emphasized simplicity and minimalism, with a focus on using simple shapes and bold colors. This led to the widespread use of simple, minimalist icons that were easy to recognize and visually appealing.

More recently, the trend towards using animated icons has become more prevalent. Animated icons can add an element of interactivity and engagement to web design, making them a popular choice for websites and applications. Another recent development in the evolution of web icons is the use of 3D graphics and isometric design. These styles add depth and dimension to icons, making them more visually interesting and engaging.

## What are icon fonts?

Icon fonts are fonts whose characters are icons rather than letters, which means they can be styled using CSS. Consequently, they scale much better than raster images, so changing the size of an icon font doesn't degrade its visual quality. Changing the color or adding a shadow is just as simple as editing text. You can easily find free icon fonts to use on your website, or you can design your own. One downside of using icon fonts is that most font sets contain icons that you probably won't use, so they will just be taking up space.

Like CSS sprites before them, icon fonts are starting to fall out of favor with developers. Properly displaying icon fonts often requires the browser to make additional requests to the server, which can lead to FOIT (flash of invisible text) on icons while the font libraries are still loading. If the browser cannot interpret the fonts, the user will just see empty characters. Since such scenarios are unacceptable for professional websites, more developers are now turning to SVGs.

## What are SVGs?

SVGs (Scalable Vector Graphics) allow vector graphics to be displayed in the browser.
SVGs are quickly becoming the new standard for web icons and animations. They not only offer superior scaling, but they often render more quickly and reliably than icon fonts. Since vector graphics are composed entirely of code, they don't have to be imported from large external files. They are also much smaller in size than your typical JPG or PNG, as well as most icon font libraries.

Making the most of your SVGs may necessitate overcoming a bit of a learning curve, but the rewards are well worth the effort.

## How SVGs work

It's possible to use SVGs within a regular `<img>` element in your HTML, utilizing the width and height attributes to adjust the dimensions. However, this method somewhat limits your ability to customize your SVGs.

If you want the ability to further customize your SVG icons directly from within the HTML, you'll need to inline your SVG by simply pasting the code directly into your HTML document. Then, you can change the color or apply filters by targeting it with CSS. Here is what an example SVG icon looks like:

```html
<svg version="1.1" baseProfile="full" width="300" height="200" xmlns="http://www.w3.org/2000/svg">
  <rect width="100%" height="100%" fill="#3686be" />
  <circle cx="150" cy="100" r="80" fill="white" />
  <text x="150" y="115" font-size="35" text-anchor="middle" fill="#3a3a3a">KeyCDN</text>
</svg>
```

The code above, inspired by Mozilla, renders a blue rectangle with a white circle and the text "KeyCDN" directly in the browser.

Although SVG code can seem intimidating at first glance, designing and controlling SVG icons is easier than it looks. In fact, you can just use a program like Adobe Illustrator to create your own vector graphics to use as icons. Just save them as SVG files, or generate the code within the Illustrator interface. You can also export drawings from Google Docs as SVG files.

## Are icon fonts still useful?

Icon fonts are far from obsolete. While they are not always the most efficient or the most reliable option, icon fonts are still relatively simple and easy to implement, so many developers continue to use them. Depending on the number of icons used, it may not be worth the effort to switch out icon fonts for SVGs on your older projects; however, SVGs are the definitive way of the future, so you might as well get comfortable using them going forward.

## Comparing SVGs vs icon fonts

To help you decide which icon format to choose, let's see how the two options compare in various departments.

### Advantages of icon fonts

Icon fonts have been around for a while and have been a popular choice for displaying icons on websites and applications. Here are some of their advantages:

- **Easy to use:** Icon fonts are easy to use and require minimal setup. All you need to do is include the font files in your project and use CSS to display the icons. You can even customize the icons using CSS, such as changing the color, size, and other properties.
- **Lightweight:** Icon fonts are lightweight and do not add much to your website's page load time. Since the icons are encoded as font glyphs, they are essentially text and do not require separate image files to be loaded.
- **Widely supported:** Icon fonts are widely supported by browsers and can be used on virtually any device or platform. They are also compatible with screen readers and other assistive technologies, making them accessible to users with disabilities.

### Disadvantages of icon fonts

However, there are some drawbacks to using icon fonts:

- **Limited customization:** While icon fonts can be customized using CSS, they are limited in terms of design flexibility. You are limited to the predefined set of icons included in the font, and you cannot create your own custom icons.
- **Quality issues:** Some icon fonts may suffer from quality issues, such as jagged edges or pixelation, particularly at smaller sizes. This can be especially noticeable on high-resolution screens.
- **Accessibility concerns:** While icon fonts are generally accessible, there are some concerns around their use. Since they are encoded as font glyphs, screen readers may have difficulty identifying them as images, and users may not be able to access alternative text descriptions.

### Advantages of SVGs

SVGs have become increasingly popular in recent years and are now a common choice for displaying icons on websites and applications. Here are some of their advantages:

- **Design flexibility:** SVGs offer more design flexibility than icon fonts, allowing you to create your own custom icons and graphics. You can also apply advanced effects and animations to SVGs using CSS or JavaScript.
- **Scalability:** SVGs are scalable and can be resized without losing quality. This makes them ideal for use on high-resolution screens or in responsive designs, where icons need to be resized depending on the device or screen size.
- **Accessibility:** SVGs are more accessible than icon fonts, as they can be easily identified by screen readers and other assistive technologies. You can also provide alternative text descriptions for SVGs, making them accessible to users with disabilities.

### Disadvantages of SVGs

However, there are also some drawbacks to using SVGs:

- **Complex setup:** Setting up SVGs can be more complex than icon fonts, particularly if you are creating your own custom icons. You may need to use specialized software or tools to create and optimize your SVGs.
- **Larger file sizes:** SVGs can have larger file sizes than icon fonts, particularly if they include complex graphics or animations. This can impact your website's page load time and performance.
- **Browser support:** While SVGs are supported by most modern browsers, some older browsers may not support them fully. This can result in inconsistent rendering or display issues for some users.

## Which one should you use?

So, which option should you choose? It ultimately depends on your project's specific requirements and constraints. Here are some key factors to consider when deciding between icon fonts and SVGs.

### 1. Size

If you choose to inline your SVGs in order to add styles, they can quickly increase in size, and the code can become quite cumbersome. It's also worth noting that inline SVG code doesn't get cached by the user's browser. External SVG files, on the other hand, can be cached. If you have a lot of icons on a single page, then icon fonts may provide a smoother user experience than inline SVGs. Of course, if you're using a premade icon font set, then you will probably be wasting resources on unused icons.

It's worth noting that 10 optimized SVG icons will likely be much smaller than an entire icon library. However, if you create your own custom icon font with only the icons you need, that font will end up being smaller.

### 2. Performance

Icon fonts can be cached, which helps them load quickly directly from the browser. However, the downside is that they create an additional HTTP request. On the other hand, if you're inlining SVG icons, no additional HTTP requests are needed, but the inlined code cannot be cached by the browser. You can, however, include your SVGs in an external file, making them cacheable by the browser.
Again, performance-wise, the difference in speed will depend on how large your icon font or SVGs are. Try running performance tests with both to determine which one loads faster.

3. Flexibility

Both formats can be styled using CSS, but inline SVGs give you far more options, such as strokes and multicolored icons. You can even have animated web icons.

4. Browser support

Whichever format you choose for icons, you may have to perform some extra steps to make them compatible with older browsers. Since they've been around longer, icon fonts are more widely supported. Anyone using IE 6 or higher, which likely includes everyone, should be able to see your icon fonts. If you use SVGs, you might want to include a JS polyfill to support those using IE 8 or lower. However, as most users have moved away from legacy browser versions, this shouldn't be much of a concern, regardless of whether you choose icon fonts or SVGs. The only gap in modern-browser support for SVG icons comes from IE, which doesn't properly scale SVG files (setting explicit height and width attributes is recommended).

5. Scalability

Although both SVGs and icon fonts are vector-based, browsers interpret icon fonts as text, which means they are subject to anti-aliasing. As a result, SVGs tend to look a little sharper than icon fonts.

6. Positioning

Because icon fonts are typically inserted via a pseudo-element, positioning them can be tricky. You may have to juggle line-height, vertical-align, and letter-spacing, among other factors, to get the pseudo-element and the actual glyph to line up perfectly. For SVGs, you just have to set the size.

7. Accessibility

If accessibility is a top priority for your project, SVGs may be the better choice. Unlike icon fonts, SVGs are built from semantically sensible elements, so you don't have to include any workarounds to make your icons accessible to screen readers.

SVG icon tools and resources

Mozilla's Developer Network has a very thorough SVG tutorial that explains how to style your icons with inline CSS. In addition to Adobe Illustrator, there are several tools to help you implement SVG icons. IcoMoon is an excellent resource for premade SVGs and font icons, and the IcoMoon app allows you to create your own. If you're looking for something open source, Inkscape is a free vector drawing program that exports SVG files. Tools like Convertio allow you to convert other image formats to SVG files.

It's important to note that programs like Illustrator and Inkscape often embed extra information into exported SVG files that you don't need. Therefore, you should run your SVG icons through an optimization tool like SVGO or the SVG Minifier to trim them down before adding them to your website. Apart from the resources mentioned above, there are also a variety of icon websites that provide high-quality vectors as either a paid or free service. Check out our complete icon library resources guide as well as our post on improving the speed of your glyphicons by using a glyphicon CDN.

Summary

There is still some debate in the community about whether icon fonts are better than SVGs or vice versa. The truth is, what makes one or the other "better" depends, in some cases, on the circumstance in which it is being used. More often than not, however, SVGs are the preferred method. They're much more scalable, offer a better user experience, and are supported by all major browsers.
Even a few of the top web performance experts say that moving away from icon fonts in favor of SVGs is essential. Let us know your thoughts in the comments below. Are you using icon fonts, SVG icons, or a combination of both?
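As promised earlier, here is a minimal sketch of styling an inline SVG icon from CSS. The .icon class name and the colors are illustrative choices, not taken from the article.

<style>
  /* Recolor and resize the inline icon purely from CSS */
  .icon { width: 2rem; height: 2rem; fill: #3a3a3a; transition: fill 0.2s; }
  .icon:hover { fill: #3686be; }
</style>

<svg class="icon" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" role="img" aria-label="Example icon">
  <!-- No fill attribute on the shape, so the CSS fill above applies -->
  <circle cx="12" cy="12" r="10" />
</svg>

Because the shape carries no fill attribute of its own, the inherited CSS fill wins; the same idea extends to the strokes, filters, and animations mentioned in the flexibility section.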
As a web developer, you must always be on the lookout for ways to improve the performance of your applications. With the increasing demand for faster and more efficient web applications, optimizing PHP performance has become a critical aspect of web development. In this blog, we will explore some of the best practices and tips for improving PHP performance for web applications. The best tool for improving PHP performance isn't any individual program; it's knowing which problems to look for and how to address them. This guide will cover everything you need to know to ensure that your PHP applications always run smoothly.

A brief history of PHP

PHP is a scripting language invented by Rasmus Lerdorf in 1995. Initially intended for the developer's personal use, "PHP" was originally an acronym for "Personal Home Page." Lerdorf first developed PHP as a set of Common Gateway Interface (CGI) scripts for tracking visitors to his personal website. Over time, he added more features to the language, such as dynamic generation of HTML pages, and released it as an open-source project in 1995.

In 1997, two developers, Andi Gutmans and Zeev Suraski, rewrote the core of PHP and transformed it into a more robust and efficient language. This new version of PHP, known as PHP 3, gained popularity quickly and became widely used for developing dynamic web pages. Since then, PHP has continued to evolve and improve, with the addition of new features such as improved object-oriented programming, better security features, and improved performance. Today, PHP is one of the most widely used server-side scripting languages, powering some of the biggest websites on the internet, including Facebook, Wikipedia, and WordPress.

In recent years, PHP has also seen the introduction of new versions. In 2015, PHP 7.0 was released with updates including improvements to the Zend Engine and an overall reduction in memory use. At the time of writing this article, the newest available version is PHP 8.2, which was announced in December of 2022. The PHP Classes website contains extensive details about all of the changes made in PHP 8.2.

What exactly is good PHP performance?

Performance and speed are not necessarily synonymous. Achieving optimal performance is often a balancing act that requires trade-offs between speed, accuracy, and scalability. For example, while building a web application, you may have to decide between prioritizing speed by writing a script that loads everything into memory up front or prioritizing scalability with a script that loads data in chunks.

Based on a representation from phplens, the image below depicts the theoretical trade-off between speed and scalability. The red line represents a script optimized for speed, and the blue line is a script that prioritizes scalability. When the number of simultaneous connections is low, the red line runs faster; however, as the number of users grows, the red line becomes slower. The blue line also slows down as traffic rises, but the decline isn't as drastic, so the script tuned for speed actually becomes slower than the script tuned for scalability after a certain threshold.

A real-world analogy is the comparison between a sprinter and a cross-country runner. Sprinters are much faster over short races, but they tire out in longer competitions. Cross-country runners keep a slower but more consistent pace, which allows them to conserve energy and travel longer distances. The two athletes are better suited to different situations.
Likewise, some scripts work better in different scenarios. Choosing the right one for your application will require careful consideration of your users, and you may have to adjust scripts over time as your traffic increases.

When to begin optimizing PHP code

Experienced programmers sometimes save the fine-tuning of code for the end of a project cycle. However, this is only advisable if you are certain of your PHP application's performance parameters. A more sensible approach is to conduct tests during the development process; otherwise, you may find yourself rewriting large chunks of code to make your application function properly. Before you start designing a PHP application, run benchmarks on your hardware and software to determine your performance parameters. This information can guide your coding by helping you weigh the risks and benefits of specific trade-offs. Be sure to use adequate test data, or else you could create code that doesn't scale.

Tips for optimizing PHP scripts

Writing good code is the essential first step to creating PHP applications that are fast and stable. Following these best practices from the beginning will save time on troubleshooting later.

1. Take advantage of native PHP functions

Wherever possible, try to take advantage of PHP's native functions instead of writing your own functions to achieve the same outcome. Taking a little while to learn how to use PHP's native functions will not only help you write code faster, but will also make it more efficient.

2. Use JSON instead of XML

Speaking of which, native PHP functions such as json_encode() and json_decode() are incredibly fast, which is why using JSON is preferable to using XML. If you are committed to XML, be sure to parse it using regular expressions rather than DOM manipulation.

3. Cash in on caching techniques

Memcache is particularly useful for reducing your database load, while bytecode caching engines like OPcache are great for saving execution time when scripts get compiled.

4. Cut out unnecessary calculations

When using the same value of a variable multiple times, calculate and assign the value once at the beginning rather than performing the calculation for every use.

5. Use isset()

Compared to count(), strlen(), and sizeof(), isset() is a faster and simpler way to check whether a value is set and non-empty.

6. Cut out unnecessary classes

If you don't intend to use classes or methods multiple times, you don't really need them. If you must employ classes, be sure to use derived class methods, as they are faster than methods in base classes.

7. Turn off debugging notifications

Alerts that draw your attention to errors come in handy during the coding process, but they become just one more process that slows you down after launch. Disable such notifications before going live.

8. Close database connections

Unsetting variables and closing database connections in your code will save precious memory.

9. Limit your database hits

Aggregating queries can reduce the number of hits to your database, which will make things run faster.

10. Use the strongest str functions

While str_replace is faster than preg_replace, the strtr function is four times faster than str_replace.

11. Stick with single quotes

When possible, use single quotes rather than double quotes. Double-quoted strings are parsed for variables, which can drag down performance.

12. Try three equal signs

Because === compares value and type without any type juggling, it is faster than using == for comparisons.

Types of bottlenecks that affect PHP performance

Tinkering with your scripts can certainly be beneficial.
However, there are also issues unrelated to code that can hinder PHP performance. That's why developers need a thorough understanding of their server's subsystems to identify and address bottlenecks. Below are the areas you should check if you're having performance issues.

1. The network

One obvious source of bottlenecks is the network. Depending on your current network's capacity, it may lack the power to handle the amount of data being transmitted.

2. The CPU

Transmitting plain HTML pages across a network doesn't drain your CPU, but PHP applications do. Depending on your requirements, you may need at least a server with multiple processors to process your PHP code efficiently.

3. Shared memory

A lack of shared memory can disrupt inter-process communication, which can lead to lagging performance.

4. The filesystem

Your filesystem can become fragmented over time. A file cache that uses RAM can speed up disk access, as long as there is enough memory.

5. Process management

Make sure your server isn't overburdened with unnecessary processes. Remove any unused networking protocols, antivirus scanners, mail servers, and hardware drivers. Running PHP in multi-threaded mode can also result in better response times.

6. Other servers

If your application depends on outside servers, a bottleneck on the other server can slow you down. There is not much you can do in such scenarios, but you can make alterations on your side to mitigate deficiencies on the other end.

More tips for improving PHP performance

1. Take advantage of OPcache

Because PHP is interpreted into executable code on the fly, programmers don't have to pause to compile code every time they make a small change. Unfortunately, recompiling identical code every time it runs on your website slows performance, which is why an opcode cache, such as OPcache, is very useful. OPcache is an extension that saves compiled code into memory. The next time the code executes, PHP checks timestamps and file sizes to determine whether the source file has been altered; if it has not, the cached code runs. The image below shows the difference in execution time and memory usage between a PHP application running with no cache, with OPcache, and with eAccelerator (another PHP caching tool). Source: PrestaShop

2. Identify database delays

As discussed above, performance problems are not always caused by code. Most bottlenecks occur when your application must access resources. Since the data access layer of a PHP application can account for up to 90 percent of execution time, one of the first steps you should take is to look at all instances of database access in your codebase. Make sure slow SQL logs are turned on to help you identify slow SQL queries, and then review those queries to assess their efficiency. If you discover that too many queries are being made, or that the same queries are being made several times during a single execution, you can make adjustments that boost your application's performance by cutting down on database access time.

3. Clean up your filesystem

Skim your filesystem for inefficiencies, and make sure the filesystem isn't being used for session storage. Most importantly, keep an eye out for code that can trigger a file stat, such as file_exists(), filesize(), or filemtime(). Leaving any of these functions in a loop can lead to performance issues.

4. Carefully monitor your APIs

Most web applications that depend upon external resources leverage remote APIs.
Although remote APIs are out of your control, there are actions you can take to mitigate problems stemming from API performance. For example, you can cache API output or make API calls in the background (a short sketch of response caching follows at the end of this article). Establish reasonable timeouts for API requests and, if possible, be prepared to display output without an API response.

5. Profile your PHP

Using OPcache and managing your external resources should be enough to make most applications run smoothly; however, if you find your needs increasing, it might be time to profile your PHP. A full PHP code profile can be time-consuming, but it can supply you with in-depth information about your application's performance. Thankfully, there are a handful of open-source programs for profiling your PHP code, such as Xdebug.

The importance of monitoring PHP performance

Your web application may be running fine one minute, but a sudden barrage of traffic can cause it to crash if you're unprepared. Of course, making changes always requires time, effort, and money, and it can be difficult to tell whether the investment is worth it. The best way to make informed decisions is to continually collect data. PHP performance monitoring software like New Relic, Logtail, or PHP Server Monitor helps you immediately measure the effects of any changes you make. Of course, knowing what to measure is equally important. Speed and memory usage are considered the best indicators of performance because they impact page load times, which are critical to web applications. While data collection is important, you should turn off your monitoring system when you don't need it, because an influx of logs can slow things down. Such logs give you valuable information about how to improve performance, though, so you should periodically monitor during peak traffic periods.

The future of PHP performance

The future of PHP performance looks promising. With each new version of PHP, the language continues to evolve and improve, making it faster, more efficient, and more secure. The PHP development community is constantly working to optimize the language and make it better suited to modern web development needs. In the future, we can expect to see continued improvements in performance, with a focus on making PHP even faster and more efficient. This could be achieved through the use of new technologies, such as Just-In-Time (JIT) compilation, and the implementation of new features and optimizations. Additionally, the PHP development community is likely to continue focusing on improving the security of the language, to ensure that PHP-powered applications remain safe and secure.

When building web applications, remember that what works today might not work next year. You may have to make adjustments to maintain consistent PHP performance. Focusing on the big picture during the entire development process is the best strategy for building PHP apps and websites that work for the masses.
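Here is the promised sketch of the API-caching idea from the tips above. It is a minimal illustration only: the endpoint URL, cache location, and TTL are hypothetical, and a production setup would likely use Memcache or a similar store rather than flat files.

<?php
// Minimal sketch: cache a remote API response on disk so a slow or failing
// third-party call doesn't block every request.
function fetch_with_cache(string $url, string $cacheFile, int $ttl = 300): ?string
{
    // Serve from cache while it is still fresh.
    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return file_get_contents($cacheFile);
    }

    // Otherwise call the API with hard timeouts so a slow upstream can't hang the page.
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CONNECTTIMEOUT => 2,
        CURLOPT_TIMEOUT        => 3,
    ]);
    $body = curl_exec($ch);
    curl_close($ch);

    if ($body !== false) {
        file_put_contents($cacheFile, $body);
        return $body;
    }

    // Fall back to a stale cache file (or null) if the API is unavailable.
    return is_file($cacheFile) ? file_get_contents($cacheFile) : null;
}

// Hypothetical endpoint; the cache lives in the system temp directory for the sketch.
$data = fetch_with_cache('https://api.example.com/rates', sys_get_temp_dir() . '/rates.json');

The hard connect and read timeouts keep a slow third-party API from dragging down every page view, and falling back to a stale cache file means the page can still render something when the API is down.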
We're making some important changes to our global test servers.
Moving from Lighthouse 11.0.0 to 12.3.0 saw some relevant changes to CLS scoring methodology as well as various audits.
We explain what CrUX data is and why any website owner should care about it.   Affiliate Notice: You may find affiliate links to such products below – If you decide to purchase them through the links provided, we may be paid a commission at no extra cost to you. We only recommend products we’ve […]
In this guide, we show you how to view and interpret CrUX data in the GTmetrix Report.

Overview

The CrUX tab in the GTmetrix Report displays real user data for your page, derived from Google's Chrome User Experience Report (i.e., CrUX). This data provides insights into how real users experience your page in the […]
We've brought field data (CrUX) into the GTmetrix Report in this release!
"Should I use synthetic monitoring, real user monitoring, or CrUX?" We hear this question a lot. It's important to know the strengths and limitations of each monitoring tool and what they’re best used for, so we don’t miss out on valuable insights. This post includes: How synthetic and real user monitoring (RUM) work What is CrUX? Is CrUX a substitute for RUM? When and why to use each tool An obscure cheese metaphor Plus a quick survey question at the end! Synthetic and real user monitoring For the longest time, there have been two primary forms of front-end web performance data. Synthetic data (sometimes called lab data) is performance data collected based on a very specific set of variables. For example, you can choose to test your product page on a 3G network, from Italy, on a Chrome desktop browser. Real user data (sometimes called field data) is performance data collected from real users as they browse your site under a wide variety of contexts (different browsers, devices, connection speeds, locations, etc.). Or to put it simply: Synthetic data helps you understand your pages. RUM helps you understand your users. There's never been much merit to the debate of real user monitoring (RUM) versus synthetic monitoring. They both have different uses and most companies should really be using both. Because synthetic data is so controllable and repeatable, you have a very clean set of data to look at. This makes it easy to spot regressions and improvements. You can also capture a ton of detail on each test, empowering your teams to dig deep into understanding exactly what is going on in their pages. And because synthetic lets you test any URL, not just your own pages, you can do things like benchmark your site against your competitors. Real user data, on the other hand, can be very noisy. But its comprehensiveness is also one of its strengths. RUM data captures data from every possible scenario, giving you insight into how your site performs in the real world across experiences you may not even have anticipated. RUM data also lets you correlate performance data to business and user behavior metrics, such as conversion rate and bounce rate. This is absolutely critical to sustaining a long-term focus on performance at any organization. Because they play different roles, both synthetic and real user monitoring play a critical role in any company's performance culture. A few years ago, a third form of front-end performance data was released: the Chrome User Experience Report (CrUX). We often get asked where CrUX fits in the monitoring landscape (and we have a support article that covers why your RUM and CrUX data may not match), so here's our take. First, what is CrUX? CrUX data is technically field data. It's collected from real users, not from synthetic tests. But CrUX data is definitely not RUM, or if it is, then it's a very limited form of RUM with some heavy caveats: CrUX data is collected from Chrome sessions only. There is no visibility into what is happening in Safari, Firefox, Edge or any other browser used. We often see sites with 70-80% of traffic that is NOT included in CrUX. This means your mobile Safari traffic is not measured. People often forget just how much of their traffic comes from mobile Safari. CrUX data is also only collected for certain types of Chrome users. 
Specifically, users who:
  - Enable usage statistic reporting
  - Sync their browser history
  - Don't have a sync passphrase set
  - Use a supported platform
- CrUX data also filters out origins or pages that don't meet specific eligibility criteria on at least 80% of their traffic.
- There's a limited number of metrics available. At the time of writing, there are only about a dozen performance metrics you can track.
- You can't correlate CrUX data to your business metrics. There's no way to flag any conversions or get bounce rate data, for example.

CrUX is NOT a substitute for RUM

If you were shopping for a RUM provider and a vendor told you that they had the perfect solution, except: it only recorded data from a single browser, and only a few metrics, and only from a very particular subset of your users... well, that doesn't sound particularly robust, does it? All of that is to say: while CrUX is sometimes referred to as real user monitoring, it's actually something very different from traditional synthetic and RUM tools.

But CrUX data can serve other purposes

There are a few situations where CrUX can be helpful.

Google uses CrUX data when it factors performance into its search algorithm. Search is a very important consideration for most sites, and this alone makes it worth monitoring your CrUX data. For all the reasons above (the heavy subsetting), in addition to the fact that CrUX data is collected in different ways than a traditional RUM beacon, the data will not line up directly with what you see in other tooling. But tracking CrUX data for your site may help give you visibility into how well your site is optimized for search.

CrUX data lets us do competitive benchmarking based on field data. Competitive benchmarking is a very effective way of helping organizations rally around site speed. Traditionally, benchmarking was only possible with synthetic data. CrUX lets us set up competitive benchmarking using some field data as well, which can be a nice way of augmenting existing synthetic benchmarks.

Finally, CrUX data can help bridge the gap for companies still working on implementing a full RUM solution. CrUX falls short of being a full-blown RUM solution, but for companies still working on getting RUM in place, CrUX can be a nice way to at least get some visibility into what is happening in the field. Limited visibility is still better than no visibility.

Synthetic, RUM, and CrUX data each have their place

It's important to keep in mind the limitations of each monitoring tool and what they're best used for, so we don't make the mistake of overlooking valuable insights.

Synthetic data is relatively clean and detailed, but not comprehensive in terms of testing the full spectrum of user experiences. Use it to guide your design and development process, and to help spot potential regressions and improvements.

CrUX data is field data that is important for search, but it is a very specific subset of your traffic from a single browser, with no ability to connect business and user metrics. Use it for real-user competitive benchmarking, and to help you keep on top of how Google views your site from a search perspective.

Real user data is comprehensive field data covering the full spectrum of user experiences. As such, it should be your ultimate source of truth in terms of how you're doing. Use it to see how your site performs in the real world and to correlate business and user metrics to performance.
To use a somewhat obscure cheese metaphor, CrUX can give you a good initial sniff at performance issues, but it’s no Époisses de Bourgogne. RUM, on the other hand, gives you the full flavour. Does it sound like we're announcing CrUX within SpeedCurve? Not quite, but we are exploring the idea. What are your thoughts? How would the CrUX dataset complement the data you're already getting from SpeedCurve Synthetic and RUM? Let us know at support@speedcurve.com!
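If you want to poke at the CrUX dataset yourself while weighing these options, one route is Google's public CrUX API. The sketch below assumes you have an API key and shows a single origin-level query; the origin and the choice of metric are placeholders, not a SpeedCurve feature.

// Minimal sketch: pull the field (CrUX) p75 LCP for an origin via the CrUX API.
const API_KEY = "YOUR_API_KEY"; // placeholder

async function getCruxP75Lcp(origin, formFactor = "PHONE") {
  const res = await fetch(
    `https://chromeuserexperience.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ origin, formFactor }),
    }
  );
  if (!res.ok) throw new Error(`CrUX API error: ${res.status}`);
  const { record } = await res.json();
  // Each metric exposes a histogram and percentiles; here we read the p75 LCP.
  return record?.metrics?.largest_contentful_paint?.percentiles?.p75;
}

getCruxP75Lcp("https://example.com").then(console.log).catch(console.error);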
Loading the most important resources first is key to improving website performance. However, even when resources are correctly prioritized, servers don't always adhere to the request priorities provided by the browser.
Making sense—and use!—of the new Performance Extensibility API in Chrome DevTools.
Never write your own date parsing library. Never. No exceptions. Never have I ever… So… I've written my own date parsing library. Why?

Our story begins seven years ago in the year 2018. I made the very sensible choice to adopt luxon as the Date Parsing library for Eleventy. This parsing behavior is used when Eleventy finds a String for the date value in the Data Cascade (though YAML front matter will bypass this behavior when encountering a YAML-compatible date). This choice was good for Eleventy's Node.js-only requirements at the time: accurate and not too big (relatively speaking). Eleventy has used luxon since @0.2.12 and has grown with the dependency all the way through @3.7.1. Now that's what I call a high quality dependency!

As we move Eleventy to run in more JavaScript environments and runtimes (including on the client) we've had to take a hard look at our use of Luxon, currently our largest dependency:

- 4.7 MB of 21.3 MB (22%) of @11ty/eleventy node_modules
- 229 kB of 806 kB (28%) of the @11ty/client (not yet released!) bundle size (unminified)

Given that our use of Luxon is strictly limited to the DateTime.fromISO function for ISO 8601 date parsing (not formatting or display), it would have been nice to enable tree-shaking on the Luxon library to reduce its size in the bundle (though that wouldn't have helped the node_modules size, I might have settled for that trade-off). Unfortunately, Luxon does not yet support tree-shaking so it's an all or nothing for the bundle.

The Search Begins

I did the next sensible thing and looked at a few alternatives:

| Package | Type | Disk Size | Bundle Size |
| --- | --- | --- | --- |
| luxon@3.7.1 | Dual | 4.59 MB | 81.6 kB (min) |
| moment@2.30.1 | CJS | 4.35 MB | 294.9 kB (min) |
| dayjs@1.11.13 | CJS | 670 kB | 6.9 kB (min) |
| date-fns@4.1.0 | Dual | 22.6 MB | 77.2 kB (min) |

The next in line to the throne was clearly dayjs, which is small on disk and in bundle size. Unfortunately I found it to be inaccurate: dayjs fails about 80 of the 228 tests in the test suite I'm using moving forward. As an aside, this search has made me tempted to ask: do we need to keep Dual publishing packages? I prefer ESM over CJS but maybe just pick one?

Breaking Changes

Most date parsing woes (in my opinion) come from ambiguity: from supporting too many formats or attempting maximum flexibility in parsing. And guess what: ISO 8601 is a big ol' standard with a lot of subformats. There is a maintenance freedom and simplicity in strict parsing requirements (don't let XHTML hear me say that).

Consider "200". Is this the year 200? Is this the 200th day of the current year? Surprise, in ISO 8601 it's neither — it's a decade, spanning from the year 2000 to the year 2010. And "20" is the century from the year 2000 to the year 2100.

Moving forward, we're tightening up the default date parsing in Eleventy (this is configurable — keep using Luxon if you want!). Luckily we have a north star date format: RFC 9557, billed as "an extension to the ISO 8601 / RFC 3339" formats and already in use by the upcoming Temporal web standard APIs for date and time parsing coming to a JavaScript runtime near you.
There are a few notable differences:

| Format | ISO 8601 | Date.parse* | luxon | RFC 9557 |
| --- | --- | --- | --- | --- |
| YYYY | Supported | Supported | Supported | Unsupported |
| YYYY-MM | Supported | Supported | Supported | Unsupported |
| YYYY-MM-DD | Supported | Supported | Supported | Supported |
| ±YYYYYY-MM-DD | Unsupported | Supported | Supported | Supported |
| Optional - delimiters in dates | Supported | Unsupported | Supported | Supported |
| YYYY-MM-DDTHH | Supported | Unsupported | Supported | Supported |
| YYYY-MM-DD HH (space delimiter) | Unsupported | Supported | Unsupported | Supported |
| YYYY-MM-DDtHH (lowercase delimiter) | Unsupported | Supported 😮 | Supported 😮 | Supported |
| YYYY-MM-DDTHH:II | Supported | Unsupported | Supported | Supported |
| YYYY-MM-DDTHH:II:SS | Supported | Unsupported | Supported | Supported |
| Optional : delimiters in time | Supported | Unsupported | Supported | Supported |
| YYYY-MM-DDTHH:II:SS.SSS | Supported | Supported 😮 | Supported | Supported |
| YYYY-MM-DDTHH:II:SS,SSS | Supported | Unsupported | Supported | Supported |
| Microseconds (6 digit precision) | Supported | Unsupported | Supported | Supported |
| Nanoseconds (9 digit precision) | Supported | Unsupported | Supported | Supported |
| YYYY-MM-DDTHH.H Fractional hours | Supported | Unsupported | Unsupported | Unsupported |
| YYYY-MM-DDTHH:II.I Fractional minutes | Supported | Unsupported | Unsupported | Unsupported |
| YYYY-W01 ISO Week Date | Supported | Unsupported | Supported | Unsupported |
| YYYY-DDD Year Day | Supported | Unsupported | Supported | Unsupported |
| HH:II | Supported | Unsupported | Supported | Unsupported |
| YYYY-MM-DDTHH:II:SSZ | Supported | Unsupported | Supported | Supported |
| YYYY-MM-DDTHH:II:SS±00 | Supported | Unsupported | Supported | Supported |
| YYYY-MM-DDTHH:II:SS±00:00 | Supported | Unsupported | Supported | Supported |
| YYYY-MM-DDTHH:II:SS±0000 | Unsupported 😮 | Unsupported | Supported | Supported |

Legend: Unsupported · Inaccurate parsing · 😮 Surprising (to me)

* Note that Date.parse results may be browser/runtime dependent. The results above were generated from Node.js.

A new challenger appears

It is with a little trepidation that I have shipped @11ty/parse-date-strings, a new RFC 9557 compatible date parsing library that Eleventy will use moving forward. The support table of this library matches the RFC 9557 column documented above. It's focused on parsing only, and our full test suite compares outputs with both the upcoming Temporal API and existing Luxon output. While there are a few breaking changes when compared with Luxon output (noted above), this swap will ultimately prepare us for native Temporal support without breaking changes later!

| Package | Type | Disk Size | Bundle Size |
| --- | --- | --- | --- |
| @11ty/parse-date-strings@2.0.4 | ESM | 6.69 kB | 2.3 kB (min) |

This library saves ~230 kB in the upcoming @11ty/client bundle. It should also allow the @11ty/eleventy node_modules install weight to drop from 21.3 MB to 16.6 MB. (Some folks might remember when @11ty/eleventy@1 weighed in at 155 MB!)

Late Additions

For posterity, here are a few other alternative date libraries / Temporal polyfills that I think are worth mentioning (and might help you in different ways on your own date parsing journey):

| Package | Type | Disk Size | Bundle Size |
| --- | --- | --- | --- |
| @js-temporal/polyfill@0.3.0 | Dual | 2.98 MB | 186.5 kB (min) |
| temporal-polyfill@0.3.0 | Dual | 551 kB | 56.3 kB (min) |
| @formkit/tempo@0.1.2 | Dual | 501 kB | 17.3 kB (min) |
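To see for yourself how fuzzy the Date.parse column can be, here is a small sketch that feeds a few of the formats from the table above into the runtime's own parser. The sample strings are illustrative, and, as noted above, the results vary between browsers and runtimes.

// Probe the built-in parser with a handful of ISO-ish strings.
// Results are engine-dependent, which is the ambiguity strict RFC 9557 parsing avoids.
const samples = [
  "2018",                     // year only
  "2018-03",                  // year and month
  "2018-03-05T14",            // hours only after the T
  "2018-03-05 14:30",         // space instead of T
  "2018-03-05t14:30",         // lowercase t delimiter
  "2018-03-05T14:30:00+0000", // offset without a colon
];

for (const input of samples) {
  const ms = Date.parse(input);
  console.log(input.padEnd(26), Number.isNaN(ms) ? "NaN" : new Date(ms).toISOString());
}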
What's the point of a performance metric that doesn't align with user behavior – and ultimately business outcomes? Looking at four different retail sites, we compared each LoAF metric for desktop and mobile and correlated it to conversion rate. We saw some surprising trends alongside some expected patterns.

We recently shipped support for Long Animation Frames (LoAF). We're buzzing with excitement about having better diagnostic capabilities, including script attribution for INP and our new experimental metric, Total Blocking Duration (TBD). While Andy has gone deep in the weeds on LoAF, in this post let's put the new set of metrics to the test and see how well they reflect the user experience. We'll look at real-world data from real websites and find an answer to the question: How do Long Animation Frames affect user behavior?

First, what is a Long Animation Frame?

We've covered Long Animation Frames (LoAFs) extensively in a couple of recent posts:

- The Definitive Guide to Long Animation Frames
- LoAF product release

In short, Long Animation Frames are frames with a duration of 50ms or more. LoAFs can create a poor user experience when a user is waiting for a page to render or when a user is experiencing poor responsiveness, like jank, when interacting with a page. Because LoAFs affect a page's responsiveness, they also affect a lot of the metrics we measure that are based on the user experience, such as Largest Contentful Paint (LCP) and especially Interaction to Next Paint (INP). Support for the Long Animation Frames API is currently limited to Chromium.

What are the various LoAF metrics?

Much like its predecessor, Long Tasks, we collect a few different LoAF metrics that can be useful. The metrics we'll focus on today are:

- LoAF Entries – Total count of LoAFs measured
- LoAF Total Duration – Total duration of LoAFs measured
- LoAF Work Duration – Total duration of non-rendering phases across all LoAFs
- LoAF Style and Layout Duration – Time spent calculating style and layout for the frame
- LoAF Total Blocking Duration (TBD) – Total duration of ALL blocking time for LoAFs

How to use correlation charts to see the impact of LoAF metrics on business metrics

As we've shown before when doing research on INP, correlating performance metrics with business outcomes is a great way to understand users' sensitivity to that metric. Correlation charts show a histogram of your user traffic, divided into cohorts based on a chosen performance metric. They include an overlay that highlights a related user engagement or business metric – like bounce rate or conversion rate – for each cohort. This lets you quickly spot the connection between performance, engagement, and business impact. Looking at four different retail sites, we compared each of the LoAF metrics for desktop and mobile. The yellow columns represent the cohorts of visits, broken down by the duration or count of the metric. The blue line shows the conversion rate across those cohorts.

How do LoAF Entries correlate with conversion rate?

The LoAF Entries metric represents the total count of all LoAFs measured. Across the four sites we looked at, the number of LoAF Entries for desktop users shows a correlation with conversion rates. As the number of LoAF Entries increases, conversion rates decline. However, the relationship between the two metrics varies across the four sites we observed. For example, Site A shows a sharp decrease in conversion as the number of entries climbs from 0 to 10.
Meanwhile, Site B shows a more gradual decrease from 0 to 40 entries. Looking at the same four sites, the relationship between LoAF Entries and conversion rates for mobile users also reflects a correlation. It's interesting to note that the number of LoAFs per site is higher on mobile than on desktop, with up to 90 entries on Site B. This isn't too surprising given the tendency for mobile devices (non-iOS) to use lower-end (slower) processing power. It's also worth noting that the shape of the curve (blue) is not consistent. For Site A, conversion increases in the long tail, and Site C decreases gradually after 20 entries.

How does LoAF Total Duration correlate with conversion rate?

The LoAF Total Duration metric represents the total duration of all LoAFs measured. Total Duration shows a strong relationship with conversion rates on desktop. As seen with mobile for LoAF Entries, Site A is very sensitive – the blue line drops suddenly from 0ms to 500ms. The value for Total Duration is much higher for Site B than for the other three sites. Moving on to mobile, the relationship between Total Duration and conversion rate is very similar across all four sites. It's interesting to note that, for all four sites, you begin to see the performance plateau at around 4000ms, when conversion rates level out after a sharp decline.

How does LoAF Work Duration correlate with conversion rate?

LoAF Work Duration measures the total duration of non-rendering phases across all LoAFs. Spoiler alert! LoAF Work Duration (and LoAF Style and Layout Duration, further below) both follow the same pattern seen in Total Duration above. I've shared the charts here without further commentary.

How does LoAF Style and Layout Duration correlate with conversion rate?

LoAF Style and Layout Duration (SLD) represents the total time spent calculating style and layout for the frame. As mentioned earlier in this post, this metric follows the same pattern seen in Total Duration and Work Duration.

How does LoAF Total Blocking Duration correlate with conversion rate?

LoAF Total Blocking Duration (TBD) is the total duration of all blocking time for LoAFs. It's important to mention that Total Blocking Duration is a beta metric. This is mainly because we are still learning about it and observing how useful it is. For both desktop and mobile, TBD showed a correlation with conversions. However, the relationship varied across all sites. Site A shows conversion improving in the long tail for both mobile and desktop. Site B shows a strong correlation until around 700ms, after which it becomes extremely volatile.

Key takeaways

Observation 1: LoAFs are worse on mobile

This isn't surprising, given that LoAF support is basically limited to Android devices. Lower power + JavaScript = more LoAFs with longer durations.

Observation 2: Generally speaking, conversion rate tends to suffer as LoAF metrics degrade

There is an overall negative correlation between LoAF metrics and conversions. However, I think it's a stretch to say that ALL of these metrics are good metrics. LoAF Entries on mobile may not have as much impact on conversions as the other duration metrics. Similarly, LoAF Total Blocking Duration seems to be a bit of a moving target in the long tail. The remaining duration metrics have a much more consistent and predictable pattern.
Observation 3: Consider focusing on LoAF Entries and LoAF Total Duration

What I like about this recommendation is that it gives you two distinct types of metric to investigate: a numeric metric (total entries) and a time-based metric (total duration). It's important to look at your own RUM data to draw conclusions about your own sites. This investigation suggests that you should consider focusing on LoAF Entries and LoAF Total Duration when you create your own correlation charts.

Observation 4: All sites are different

It was a little surprising to see such different results across four sites. Generally speaking, Site A performs better than the others and as a result has a very different conversion curve. Site B shows the most opportunity for improvement and also saw conversions drop off early, long before the 75th percentile for each metric.

Observation 5: But the performance plateau was surprisingly similar across sites

For Total Duration, the performance plateau started at around the same point – 4000ms – for all four sites. This means that reducing Total Duration from, say, 8000ms to 6000ms might be a worthy goal, but it may not move the needle on conversions. (But you should still consider this a great achievement as one of the steps in your performance optimization journey!)

Measuring LoAF metrics in SpeedCurve

It's easy to create your own correlation charts in SpeedCurve. If you add your own conversion data, you can easily recreate the LoAF correlation charts above. Alternatively, you can create a correlation chart with bounce rate out of the box. You'll see LoAF data throughout your SpeedCurve dashboards. Take a look at the most recent product release for details.
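If you want to see raw Long Animation Frame entries on your own pages before reaching for a RUM product, the sketch below is one way to do it with the browser's PerformanceObserver. It assumes a Chromium browser, and the simple roll-ups only approximate the Entries, Total Duration, and Total Blocking Duration metrics discussed above; they are not SpeedCurve's exact definitions.

// Minimal sketch: observe Long Animation Frames and keep rough roll-ups.
let entriesCount = 0;
let totalDuration = 0;
let totalBlockingDuration = 0;

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    entriesCount += 1;
    totalDuration += entry.duration;                 // frame duration in ms
    totalBlockingDuration += entry.blockingDuration; // blocking portion in ms
  }
});

observer.observe({ type: "long-animation-frame", buffered: true });

// Report the roll-ups when the page is hidden (e.g. via your RUM beacon).
document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") {
    console.log({ entriesCount, totalDuration, totalBlockingDuration });
  }
});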
Learn how a strategic SEO content marketing plan and technical website migration helped a Colorado-based money lender generate 77% more website traffic.
Learn about all the latest Google algorithm updates and their effects on organic search performance with this comprehensive guide.
Attract and convert more customers at a lower acquisition cost with these local digital marketing strategies from Inflow.
We show you how we used E.A.T. SEO strategies to improve our eCommerce client’s site, leading to a 300% increase in organic revenue.
Was your website affected by the most recent Google update? Follow our step-by-step process to find out.
Across every layer of the company, business people get too much data. Hunting expeditions are launched to locate what’s relevant and useful. CMO scorecards have impressions! Agencies are using CPMs, or worse “Traffic” as primary KPI. A Bank’s CMO has NPS on their dashboard – NPS is sourced from everything a company does AFTER Marketing’s […] The post Kill Data Pukes: Split KPIs, Diagnostic Metrics & Influencing Variables. appeared first on Occam's Razor by Avinash Kaushik.
If a conversation is occurring about Data and Analytics, chances are high that it is about Metrics. About how abhorrent Vanity Metrics are. About the marginal value of Activity Metrics. About how crucial a focus on Outcome Metrics is. About Metrics for Dashboard – NO! Only KPIs for Dashboards. About the difference between KPIs and […] The post Marketing Analytics: Methodologies Trump Metrics! appeared first on Occam's Razor by Avinash Kaushik.
The most important lesson in Marketing is also the simplest: Successful Marketing requires incredible Creative, and sufficient Media weight. Simple, no? Yet, in my experience, it is rare that either part of that equation is understood or optimally executed. This disappointing reality limits the success for your Performance Marketing campaigns, and is death for your […] The post Marketing: Win Before You Spend: Pre-Test Creative + Media Sufficiency. appeared first on Occam's Razor by Avinash Kaushik.
The data we deal with has such immense complexity built into it (across metrics, methodologies, sources), the business itself is so complex (no more just run ads on TV, wait for people to walk into our stores), any analyst has to strive to simplify complexity every day. Hence, one thing our community has in common […] The post Data Visualization Tips & Tricks: What Not To Do! appeared first on Occam's Razor by Avinash Kaushik.
“Let’s all focus on a single metric, a True North for the entire company!” This is an understandable sentiment from Extremely Senior Leaders (ESLs). There are so many data pukes (sorry, “dashboards”) running around the organization, employees face such difficulty in being able to be smarter. Or, worse, Teams/Agencies can cherry-pick and show “impact.” Hence, […] The post Marketing Analytics Mistake #1: Efficiency Without Effectiveness! appeared first on Occam's Razor by Avinash Kaushik.
Managing media is a really difficult task if you try to do all of it yourself, especially if the media comes from other sources. The file can be submitted in any state and size, but what if you need something really specific? You can code it all yourself or you can use an awesome service […] The post Easy way to upload, transform and deliver files and images (Sponsored) appeared first on David Walsh Blog.
The ability to download media on the internet almost feels like a lost art. When I was in my teens, piracy of mp3s, movies, and just about everything else via torrents and apps like Kazaa, LimeWire, Napster, etc. was in full swing. These days sites use blob URLs and other means to prevent downloads. Luckily […] The post How to Download a YouTube Video or Channel appeared first on David Walsh Blog.
curl is one of those great utilities that’s been around seemingly forever and has endless use cases. These days I find myself using curl to batch download files and test APIs. Sometimes my testing leads me to using different HTTP headers in my requests. To add a header to a curl request, use the -H […] The post How to Add a Header to a curl Request appeared first on David Walsh Blog.
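The excerpt above is truncated, but as a small illustration of the flag it mentions, adding headers to a curl request can look like this (the URL and header values are placeholders):

# -H adds one request header; repeat the flag for multiple headers.
curl -H "Accept: application/json" \
     -H "Authorization: Bearer $TOKEN" \
     https://api.example.com/v1/items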
CSS selectors never cease to amaze me in how powerful they can be in matching complex patterns. Most of that flexibility is in parent/child/sibling relationships, very seldom in value matching. Consider my surprise when I learned that CSS allows matching attribute values regardless of case! Adding a {space}i to the attribute selector brackets will make […] The post Case Insensitive CSS Attribute Selector appeared first on David Walsh Blog.
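The teaser cuts off mid-sentence, but the feature it describes is standard CSS; a minimal illustration (the selector and color are arbitrary examples):

/* The trailing "i" makes the attribute value match case-insensitively,
   so links ending in .pdf, .PDF, or .Pdf all get the style. */
a[href$=".pdf" i] {
  color: crimson;
}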
Working on a web extension that ships to an app store and isn’t immediately modifiable, like a website, can be difficult. Since you cannot immediately deploy updates, you sometimes need to bake in hardcoded date-based logic. Testing future dates can be difficult if you don’t know how to quickly change the date on your local […] The post How to Set Date Time from Mac Command Line appeared first on David Walsh Blog.
The story of NaughtyDuk©'s quality-over-speed mindset, their work with top entertainment brands, and the open-source tools they’ve built along the way.
Discover how to create a subtle, interactive WebGL background with Bayer dithering in this quick tutorial.
How procedural modeling and a few smart abstractions can turn complex 3D design into a simple, intuitive web experience.
A hands-on walkthrough from Eduard Bodak on crafting scroll-driven and interactive animations for his portfolio.
A glimpse into the early work, process, and inspiration of Ivor Jian, a self-taught designer and developer blending precision with expressive web experiences.