Plain Text Reddit
Top posts of the week for r/QualityAssurance
I’ve been helping with QA processes on a project, and one area I’m trying to get better at is API testing + documentation. There are so many tools out there, and I feel like every team has their own approach depending on whether they want GUI-based testing, CLI automation, or something self-hosted. Here are some that I’ve come across so far:
* Postman → Still the most widely used, lots of tutorials and integrations.
* Hoppscotch → Open source, very lightweight, works in the browser or can be self-hosted.
* Bruno → Interesting because collections are just plain text, easy to keep in Git.
* Hurl → CLI tool for testing APIs using simple text files, clean for automation.
* Insomnia → Good alternative to Postman, nice UI.
* Apidog → Similar to Postman but with a built-in docs feature and offline support, which some QA workflows may find useful.
* Thunder Client → VS Code extension, convenient if you’re already coding/testing inside the editor.
* SoapUI → More enterprise-y, but still useful if you’re testing SOAP or complex protocols.

For QA folks here: Which of these (or others I didn’t mention) actually stick in your workflows? Do you separate tools for testing vs documenting APIs, or do you prefer one tool that does both? Curious to hear what’s working for different teams.
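To make the "plain text" appeal concrete, here is a minimal sketch of a Hurl file; the endpoint and expected fields are made up, but the shape is real: the request, the expected status, and the assertions all live in one Git-diffable file that doubles as documentation and a CI check.

```
GET https://api.example.com/users/1

HTTP 200
[Asserts]
header "Content-Type" contains "application/json"
jsonpath "$.name" == "John Doe"
```

Running `hurl --test users.hurl` in a pipeline then gives you pass/fail without any GUI involved.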
As a tester, I know that the most costly problems rarely come from failing a single test case, they come from failing to anticipate an entire category of risks. On several occasions, PMs and POs have asked me why one tester didn’t report a bug that another did. The answer is usually simple but profound: the first tester probably didn’t imagine that the system could fail in that way. In testing, we rarely have all the information. Most of the time, we build test cases and strategies from scratch, guided more by our experience and ability to imagine scenarios than by detailed documentation. An excellent tester thinks technically, critically, practically, and in this case, I’ll emphasize CREATIVELY. How are you fostering or applying this type of thinking in your projects?
Hi all, I’ve been working in **manual testing** and now I’m planning to move into **automation testing**. Since my coding knowledge is still at a beginner level, I’m trying to figure out the best tool to start with. On one hand, **Selenium** has been around for a long time, has a huge community, and is still mentioned in many job descriptions. On the other hand, **Playwright** seems to be the newer, modern option that many people recommend for speed and ease of use. From a **career standpoint**: * Which tool would give me a better foundation as someone new to automation? * Is Selenium still in higher demand, or is Playwright becoming more valuable in the job market? * For long-term growth, which path would you suggest? I’d really appreciate advice from anyone who has made the transition from manual to automation. Thanks in advance 🙌
Hi everyone! I want to start a career in **QA** from scratch. I’m still at the age where I could be in university, but honestly, I don’t want to pursue the degree I’m currently studying, and I also don’t want to ask my parents to pay for another full degree. (I am from Mexico btw) That’s why I’m looking for something that could help me find a job relatively faster and more affordable than a university degree. I know it’s not something immediate or as easy as places like TripleTen try to make it sound lol, but I believe QA could give me better chances to enter the job market and grow professionally. I’m not bad with tech: I’ve built PCs (including my own), I have some basic programming knowledge, and my English level is around B2. So I’d like to ask those of you with experience: * Where do you recommend I start in QA? * Is there any specific course that’s actually worth it? * What path or specialization would you suggest to focus on? * What advice would you give to someone who wants to land a job in QA as soon as possible? Thanks for reading :)
Chat, is my career over?? I have 4+ years of experience in software testing and have worked on both manual and automation testing with Selenium and Java, but I have more than a year of career gap. Now I'm not seeing many job openings, and even when I apply I'm not getting shortlisted. Even for manual roles, they're looking only for testers with particular domain knowledge. How should I proceed? What are my options?
Hey everyone, I have my interview for an **Automation Testing Consultant** role at Deloitte USI on Monday! I'm super excited and a little nervous. I have about **4 years of experience** in the field, and I'm looking for some last-minute tips on what to focus on. For a 4 YOE professional, what are the key areas I should be sure to cover? I'm expecting some questions on automation frameworks and tools, but what else should I prepare for? Specifically, I'm thinking about: * **Technical skills:** Which automation tools are most important for Deloitte? * **Behavioral questions:** What kind of scenario-based or behavioral questions should I anticipate? How can I best use the STAR method to answer? * **Case studies/scenarios:** Should I be ready to walk through a project from start to finish? Any advice from people who have interviewed for a similar role at Deloitte USI, or are working in the same post, or in consulting in general, would be greatly appreciated. Any and all tips are welcome! Wish me luck! 🙏
Hey QA community, I'm writing this post because I'm at my wit's end and honestly, feeling incredibly deflated. For almost a year and a half now, I've been relentlessly trying to break into a QA role, specifically entry-level, and it feels like I'm hitting a brick wall made of solid steel. I'm not even getting noticed; just a constant stream of application denials. I've been self-teaching and studying diligently, focusing on automation tools like Playwright and Selenium, and getting a grasp of basic JavaScript and Python. I really enjoy the problem-solving aspects and the idea of ensuring quality in software. Currently, I work as a Product Support Analyst at a software company remotely. I had hoped this would be my foot in the door. I even managed to shadow with our internal QA team for a while, which was fantastic experience, but that's unfortunately stopped as they've become extremely busy, especially after recent company layoffs. Heck, I even applied for a QA position and was denied since it was not the product I supported as my current role. There's no room for internal hires on the QA team right now, which stings because I was really hoping for that internal transfer opportunity. The constant rejections are incredibly discouraging. I'm not making enough in my current role to comfortably support myself, let alone even think about taking a vacation or having much financial breathing room. I'm just so tired of feeling like I'm running in place. I'm not sure what to do now. Do I keep pushing? Do I pivot entirely? What am I missing? Any advice, insights, or even just shared experiences would be incredibly helpful right now. I feel so lost and burnt out from this job search.
I currently use Jenkins at work for nightly regression test runs, and I like it. But I'm wondering if GitHub Actions is better. I use GitHub Actions for my small personal GitHub projects to run simple tests triggered when a PR is created and when it's merged to main, but I only use the basic features as a developer, not as a QA. At a glance, Jenkins looks more flexible and able to handle more complicated workflows, but my experience with GitHub Actions isn't enough to judge. I'd like to hear how other QA folks are doing this.
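For reference, scheduling a nightly regression run in GitHub Actions only takes a cron trigger in the workflow file. A minimal sketch (the job name and build steps are assumptions, not from the post):

```yaml
# .github/workflows/nightly.yml — hypothetical nightly regression run
name: nightly-regression
on:
  schedule:
    - cron: "0 2 * * *"   # every night at 02:00 UTC
  workflow_dispatch:       # also allow manual runs from the UI
jobs:
  regression:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
```

The trade-off people usually cite: Jenkins gives you more control (self-hosted agents, complex pipelines), while Actions removes the maintenance burden of running Jenkins itself.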
I've been thinking a lot about how AI testing tools that leverage existing ecosystems will probably win in the long term - rather than fully autonomous tools that "take over" your QA. I think that the cost of controlling your AI agents, like by owning the test code they create, will pay off long term. I put my thoughts on why choosing a testing strategy that doesn't involve autonomous agents is the right way to go. [https://endform.dev/blog/rethink-your-ai-e2e-strategy](https://endform.dev/blog/rethink-your-ai-e2e-strategy)
I’m completely new to this field, so please don’t judge. I recently graduated and I’m currently learning both manual and automation testing in depth. I see opportunities for both, but I’ve also heard people say that manual testing might become obsolete in a few years. So, would it be a good idea to apply for jobs that are purely manual, or should I focus more on automation and apply for those roles instead?
What are the best sites or youtube playlists to learn Playwright? I have some background in Javascript but have mostly been using low-code test automation software and currently looking to upskill
Hi, I have 7+ YOE, starting in manual and moving to automation test engineering. I always wanted to be in development, but things didn't go that way. I know it's been a long time in QA, but my primary reasons to switch are: 1. salary, 2. quality of code, which I'm not getting in a QA role. Has anyone here shifted to SDE? What should my expectations be? Will I have to join at lower pay and a lower role? My current package is less than 25 LPA.
Hi everyone, I’m working as a QA with experience in both manual and automation testing. While I’m fairly comfortable on that side, I’m new to performance testing and would like to build a strong foundation. I’m looking for a beginner-friendly course (online preferred) that covers the basics properly, something that helps me understand the concepts, tools, and best practices so I can get started with confidence. If anyone has suggestions or personal recommendations, I’d really appreciate it. Thanks in advance!
Hi all, I am currently at a crossroads with my career and seeking advice/perspective. I am 31 years old with 10 years of QA experience. My experience has primarily been manual/functional testing, with some automation and ETL testing. I also have experience in project management, leading QA and testing for large-scale projects. After being let go from my previous role as a QA Lead (thanks DOGE), I now lead testing for an ERP project. Nothing about it is technical; it's really project management (schedule management, risk mitigation, strategy, etc.). I prefer technical roles doing hands-on work, but wonder if I should stay on the project management path (eventually getting a PMP). If I don't go the PM route, I was considering transitioning to a Cloud or data-focused role. I love ETL/data pipeline work, but I don't have the tech experience to land a job. It seems like manual roles have become more difficult to land, or manual and automation roles are being combined into one (and I don't have the most automation experience). As I said, potential career transitions would be: * Project Management * ETL Engineer/Tester * Cloud * Automation Testing I'm curious to know if anyone here successfully transitioned into one of these spaces and would love to hear from other QA professionals.
**Hey everyone!** A few days ago, I got a call from someone telling me that my CV was “approved” by the managers of company X (where I applied about 1–2 weeks ago). She asked if we could schedule an online meeting of about 1h 30min. I agreed, but to be honest, I have zero experience in quality assurance and I’m not really from that field (nothing in my CV shows QA experience, just some adjacent areas). The problem is that the interview is tomorrow and I honestly have no knowledge of QA. I asked ChatGPT to give me a short summary of the role and some info about the company. Still, what do you think would be useful for me to study before tomorrow? (mentioning that this is my very first interview ever). P.S. The position is listed as *“Working Student for Trinity Development & QA team.”*
Hope you all are staying strong in the circumstances. Fortunately, I have an immediate opening in my org for a strong SDET with Java, Maven, CI/CD, and Node understanding. DM me and I'll try to expedite this.
Hey there, I'm testing a fintech platform and need to add credit card details to test Visa, AmEx, MasterCard, China UnionPay, and more. How do you test such scenarios?
I'm new to the field and trying to learn. Also, generally, how is game QA viewed compared to other tech QA?
Hi everyone, I was always interested in coding during my college times, but due to some circumstances, I started my career as a manual test engineer. I’ve got almost 5 years of experience, mainly in manual web app testing, with a bit of exposure to web and API automation. Recently, I got a great opportunity to transition into an SDET role (in my current org. only). Now, I know many SDETs also perform a good chunk of manual testing, but that’s not the case for me. I’m working full-time on automation. However, it’s not the traditional web/API automation most SDETs usually work on (if I’m not mistaken). Instead, my work is focused on **LLM evaluation and automation.** To give some context: we’re building an AI agent. My role is to build a test framework to evaluate its responses. I’m handling this from scratch, and it’s been exciting to design and implement such a framework myself. I'm learning a lot, I genuinely enjoy the work, and I've become very valuable at my company too. Here’s where I’m a bit stuck: * Should I continue down this niche path of LLM evaluation/testing automation? * Or should I pivot back towards more “standard” web/API automation since that’s what most job descriptions seem to ask for? I feel a little insecure because a lot of SDETs have strong experience in web/API automation, while I’ve spent most of my career in manual testing. Only in the last 3–4 months have I been working full-time on automation — but in this very niche space. **My questions:** * How do you see the market for LLM & AI-Agent testing/evaluation automation evolving? * Do you think there will be more demand for such roles in the near future, or should I invest time in getting stronger in the traditional SDET skills (web & API automation) to stay market-relevant? Would love to hear your perspectives!
I’m currently a management consultant who’s been working on a rather large IVR. The QA resources at the company are some of the best people I’ve met, but it’s pretty clear they’re deeply ingrained in manual testing. I’ve been part of a small team implementing Botium, an automation test tool with tons of capabilities. The resources barely use the tool despite the fact that their superiors purchased it and put them through several months of training. I wanted to get a QA resource's perspective on this: 1. Why are some people apprehensive about using the automation tool? 2. Is it normal for a QA resource to have expert knowledge of the APIs? Welcome any thoughts or questions too. Thanks!
Hey all! Wrote a new blog about QA, curious to hear your thoughts. [https://getdecipher.com/blog/the-reproducibility-paradox](https://getdecipher.com/blog/the-reproducibility-paradox)
I keep getting pushed by my peers, and sometimes the PM, to ask developers why [x] number of bug tickets are not resolved. The ticket statuses clearly say “Open”, so why don't the PMs ask the developers themselves, and why don't the developers take responsibility for them?
Afternoon, I’ve been in first-line support for 2½ years. You don’t need to be technical in my role, just have a good knowledge of the product. Due to a couple of problematic software updates, I was asked to join a mob testing group with the QA team about once a month. I’ve been really helpful in these sessions, often spotting issues that others missed during mob testing because I have a good understanding of user experience. Well, long story short, the junior QA is leaving, and their manager thinks I’d make a great tester and wants me to apply and attend an interview. I’m unsure if this is the right move. I worry that the job might get dull and unchallenging, and without technical skills, I could end up stuck in manual testing. I’d love to hear from people in QA, in the UK or elsewhere — is the job interesting, has being in QA helped you build a good career path, and how important is upskilling to progress in this role.
Hi everyone, I want to introduce some API tests into our workflow. All backend code exists within Nest and Node, so I think the best choice is to follow that path and write the automation framework for APIs in TypeScript as well. While I know that Playwright is the best option for UI tests, what about API/backend end-to-end testing in general in TypeScript? What packages should I use? Playwright does have API testing, but it seems like overkill to bring Playwright into the backend code (no UI tests in sight, unfortunately). Maybe Jest + Axios or SuperTest? I’m feeling a bit lost, so I hope you’ll share some of your experience.
Hey everyone, I’m a 23-year-old QA Engineer with 1.5 years of experience and a strong passion for automation. I could really use some advice from the community. Last October, I built a proof-of-concept for automation using Playwright at my firm, and the company has **now** decided to move forward with my approach. I’ve been tasked with creating a **Playwright framework (with TypeScript)** for one of our products. Once the framework is ready, I’ll also need to migrate around **500 test cases from WebdriverIO to Playwright**. Here’s what I’ve accomplished so far: * Implemented Playwright with a custom login fixture * Using **Page Object Model (POM)** as the design pattern * Integrated the framework with **Azure DevOps pipelines** * **Next step:** Adding ESLint to catch errors early The product I’m testing is a **React-based work management platform**. Since no one else at my firm has prior Playwright experience, I want to make sure the framework I build is **scalable, robust, and easy to maintain**. The main reason I chose Playwright for my POC was that the previous Selenium framework had poor code quality and was never properly maintained. I’d love your advice on the following: * Best practices to follow when building a Playwright framework * Recommendations for handling a large-scale test migration (500+ cases) * Common pitfalls to avoid in framework design * How to ensure the framework is easy for others to understand and work with I’m eager to learn and improve, so I’d really appreciate any suggestions, tips, or guidance you can share. **Note:** I’m strictly forbidden from using any libraries that rely on OpenAI.
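On the ESLint step, one possible starting point is a flat config combining typescript-eslint with eslint-plugin-playwright. A sketch, assuming those package names and their exported configs (verify against the plugins' current docs before adopting):

```
// eslint.config.mjs — hypothetical minimal flat config
import tseslint from "typescript-eslint";
import playwright from "eslint-plugin-playwright";

export default [
  ...tseslint.configs.recommended,
  {
    // Apply Playwright-specific rules (e.g. no missing awaits on
    // assertions) only to the test directory.
    files: ["tests/**/*.ts"],
    ...playwright.configs["flat/recommended"],
  },
];
```

Rules like the plugin's missing-await checks catch the single most common Playwright mistake (forgetting to `await` an `expect`) at lint time rather than as a flaky run.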
Would you trust an AI assistant (like a ChatGPT-style bot) to modify or update your test automation code?
Hi guys, I’m facing a pretty big challenge and need your insights. The QA team has a legacy Selenium/Java test suite that’s been built over 3–4 years. The main contributors have left. It has around 1.5k test cases written in Cucumber style. Here’s the situation: * Runs once per day, in parallel (chunks by tag) * Execution time: \~6–7 hours * Extremely flaky: \~30–40% of tests fail on every run * Not part of the delivery pipeline * Dev team doesn’t trust it at all because of the flakiness * Current QA engineers barely contribute — only 1 or 2 check it regularly, and they don’t have enough time/experience to stabilize or refactor it So right now, it’s essentially a giant, flaky, slow, untrusted test suite. **My question:** If you were in my shoes, what would be the *smartest move* to get the best ROI? Do you try to rescue and stabilize this legacy monster, or is it better to sunset it and start fresh with a new strategy (smaller, faster, reliable tests in the pipeline) using more modern stack like PW+JS?
Is it worth going toward business analyst? I have a total of 5 years of automotive manual testing experience, but since I lost my job in Feb 2024 I still can't find anything. I even took a test automation course using Cypress and JavaScript. Waste of my $4k; don't waste money on these automation courses. My current company has a program where they train you to be a business analyst: they hire you as an associate BA for 9 months, then you become Business Analyst 1, 2, and so on. Right now I'm an Underwriter 2.
Hi, I'm a Quality Assurance Engineer with over 8 years of experience, and recently I started to feel that I should move to something different... I'm a bit bored by QA and, to be honest, I don't find test automation fun anymore. I would like to try something related to a business or product instead. Do you think switching to Business Analyst, Project Manager or Product Owner might be a good idea? Do these roles still have potential in the era of AI?
I’ve been getting deeper into DevOps workflows, and one frustration I keep hitting is that quality gates rarely reflect how *our* team actually writes code. They’re static rules, written once and rarely updated. They don’t evolve with the stack, and they definitely don’t capture the little conventions we care about. I saw a different approach recently where every PR decision feeds back into the quality gate automatically. That clicked for me. So I'm just trying to understand: how do you all approach this? Do you rely on static configs, or do you have ways to let standards evolve with the team?
Basically, I open xscan and it is automatically using ‘select on screen’. I attempt to close it, but clicking anything except the SUT UI to add controls is unresponsive. I literally cannot click anything, it just doesn’t do anything. I HAVE to use task manager to forcibly quit the application every time. Yes, my resolution and zoom are set to 100%. Maybe I’m just frustrated, but I am failing to see the value in this. The tutorials are mediocre (why the hell do I have to watch the whole video to rewind?????), there’s virtually no support online, the software itself is buggy, slow and unreliable, and I can already do everything much easier with playwright. Setting it up was even a nightmare given the convoluted mess of instructions on their website. I cannot imagine on-boarding an entire team. Someone please help me understand why anyone would use this. It seems grossly overhyped for what it does.
At our company (which builds a SaaS app), we have a solid team of engineers and testers, but we're looking to hire another QA. At least one, maybe two. Hiring QAs has been hit or miss in the past. We've used TestGorilla for general aptitude and "attention to detail" type tests. Those are okay as far as they go, but we've learned with other teams (like engineering,) that live tests are always the best way to suss out the top candidates. In other words, do an hour video-interview where we ask live coding questions and see what code they wrote and how they write it (how they think, talk through it). For QA, that's a bit more difficult. We were planning to spend dev time coding up and releasing a product where (in a QA environment) we could point new candidates to use the feature which has some hidden bugs or odd UX patterns and see what they find and how they document it. It occurred to us that there are probably apps out there for doing this - ie. offering QA tests, but for finding live bugs on screen. I did some searching and was told (by Claude) that these apps offer such testing: - Test.io (now Applause) - HackerRank - QA Wolf - Testlio - Codility I looked at a couple of those and was left wanting. Not sure that some even do that at all. Does anyone here know of a good one?
So, mutation testing; for the first 4 of my 5ish years of being a QA / SDET, I'd never heard of it. But injecting purposeful faults into developer's code to find bugs, sounds just about as fun as it gets. I've never tried it in an official process sense, but it seems really powerful. 100% it would have caught some major bugs earlier in the systems I work on. Does anyone use it regularly? If you do, is there a process / tool that you follow, or is it more like exploratory testing with fault injection? What are people's thoughts on it?
Howdy, I just got a new QA job. I'm sort of new, but not completely unfamiliar with it; I was previously a game dev and jumped ship. I had a meeting with my manager about something, and he basically brought up that in order to test, I should be looking at the developers' code to try to identify potentially defective cases, e.g. the developer doesn't check the type of a value before assigning it to a field. I'm very new to the job, so I didn't want to get into an argument. I basically asked if the developers do code reviews, and he said that he was usually the one who did it but didn't have time, so now he was hoping I would be the one to do it. It's annoying because it confuses my idea of the QA process. Normally I just test ACs, creating test cases for them. But having to check code means what? I raise a bug ticket because some guy's code doesn't handle a case? It seems a bit stupid to me. Devs should be doing code review, not me. Any feedback on this would be great. I have some leverage since I was hired to manage process but also perform testing. Now I feel like I'm getting more stuff dumped on me that I won't have time to do. Thanks!
Hello everyone, I want to switch my career to Salesforce QA. I have no prior knowledge of QA or Salesforce, since I work as a quality manager at a construction company, but I want to enter the IT world. I would like to know how I can start this new journey, what courses to take, and what to focus on during my learning.
I've sent my new resume out to what feels like a lot of places and have either gotten a rejection letter or no reply. With my previous resume, when I was looking a few years back, I felt my success rate was much higher; I would get phone interviews and move into later rounds. Would anyone with experience hiring, or recently successful at snagging a role, be willing to have a look at my resume? For some more context, I used advice from the resume sub: used that ATS scoring system, limited it to two pages, etc.
Hey everyone, I’m starting my journey into **Selenium automation testing,** and I want to invest my time in the *right* course. There are numerous options available on Udemy, Coursera, Test Automation University, LinkedIn, Edureka, and more. I’d love to hear from this community: * Which course actually helped you **build real frameworks** (not just toy examples)? * Did you find **Java-based courses** more useful? * Are there any free resources that are just as good as paid ones? My goal is to go beyond basics → get strong enough to work on frameworks, CI/CD integration, and automation in real projects. Looking forward to your suggestions 🙏
Hi everyone, I’m a fresher who has recently joined an MNC with no prior experience or knowledge in software testing or automation. Currently, I’m undergoing training in this field, focusing on tools like Selenium, Java, JUnit, TestNG, Postman, and Rest Assured. To be honest, I’m still uncertain whether this is the right path for me, but since I’m locked into this role for at least one more year, I want to give it a fair shot. I’d love some guidance from people in the industry. Here are my key questions: 1) What are some of the top certifications or tools I should focus on learning to be industry-relevant? 2) What technologies or tools are becoming more popular or essential in the world of automation testing? 3) I’ve been thinking about possibly combining automation testing with cybersecurity. Do you think that pursuing knowledge in both domains would open up more career opportunities in the future? I’m open to all advice and I’d really appreciate any insights or suggestions on how to best navigate this early stage of my career. Thanks in advance!
Hi all! For the past months I have been working in the video games industry. I feel the practices so far are rather inefficient, and what’s even worse (in my opinion): not really transparent/well showcased. The smoke tests and all kinds of checks are kept within Google spreadsheets, and they grow with each release, as the business model is based on selling expansions that are add-on content for the base game. Some checklists have huuuundreds of entries, which makes the overall testing result and coverage difficult to assess by other domains which might be interested (e.g. Producers). I have been trying to convince the team to use Xray (a Jira add-on) to create test cases per feature; we already use Jira for our project management and bug tracking, so Xray feels like a natural choice. Our documentation is… nonexistent. We have Confluence in our company, but literally NO documentation is made on an ongoing basis, and whatever relevant content exists was made years ago. Do you have some solid suggestions, tips, or recommendations when it comes to tools or processes we could look into and assess whether they could work for us? Many people on our team seem to believe that games are unique and that virtually no common software QA practices apply, which I disagree with, as I come from other software companies also working on inherently complex products. So I really want to audit what we have against some very solid ideas of yours! Apologies for such an open-ended question, but I’m genuinely open to looking into any and all suggestions! :)
Hi guys, I've been working in IT professionally for 20+ years, as Help Desk, then Sys Admin, IT business owner, but I'm looking to transition into QA and possibly even development. I've been taking a QA course and feeling more comfortable. My question is, have any of you successfully transitioned from IT "support" over to QA? If so, what was your strategy? How can I go about landing my first QA gig without any real world experience? Thanks!
Hey everyone, imagine you're starting a new role as a QA Manager next week, leading an established team with a mix of manual testers and automation engineers. What would be your game plan for the first month? * **Week 1:** What are your non-negotiable "must-do" actions? * **First 30 Days:** What key information would you aim to have gathered? * **Red Flags:** What's a subtle red flag you would be looking out for? Thanks in advance for sharing your wisdom!
Hey r/QualityAssurance, Looking for advice on presenting RFP solutions to clients, specifically around automation and AI in QA. How do you make your proposals stand out from the typical “we do Selenium/Playwright and have CI/CD” responses everyone else is pitching? What I’m struggling with: - Differentiating our AI-powered testing approach from standard automation - Showing real value beyond just “faster execution” - Avoiding the same buzzwords everyone uses Questions: 1. Do you focus more on technical demos or business outcomes? 2. How do you handle clients who are skeptical about AI in testing? 3. Any creative ways to showcase intelligent test generation vs traditional scripting? Really want to move beyond the commodity automation pitch and show genuine innovation. The market’s getting saturated with “AI-powered” claims that are just basic ML. Any war stories or lessons learned would be awesome! Thanks in advance 🙏
I've just published Mocky Balboa https://docs.mockybalboa.com/. A tool for mocking server-side network requests in your fullstack frameworks. Think Next.js server components, Astro, Nuxt etc. The project was inspired by a concept I built out a couple of years ago that has since been battle-tested. The initial concept wasn't portable and was heavily tied to Playwright and Next.js. Mocky Balboa is framework agnostic with first-class support for major frameworks. You don't need to run any proxy servers or define static fixtures. It takes a declarative approach where you can create your mocks at runtime directly within your test suites. Here's an example code snippet from the Playwright docs page. ``` import { test, expect } from "@playwright/test"; import { createClient } from "@mocky-balboa/playwright"; test("my page loads", async ({ page, context }) => { // Create our Mocky Balboa client and establish a connection with the server const client = await createClient(context); // Register our fixture on routes matching '**/api/users' client.route("**/api/users", (route) => { return route.fulfill({ status: 200, body: JSON.stringify([ { id: "user-1", name: "John Doe" }, { id: "user-2", name: "Jane Doe" } ]), headers: { "Content-Type": "application/json" }, }); }); // Visit the page of our application in the browser await page.goto("http://localhost:3000"); // Our mock above should have been returned on our server await expect(page.getByText("John Doe")).toBeVisible(); }); ``` I'd love feedback, and I hope others find this a useful, elegant solution to a recurring problem, as we move more and more towards server-side rendering in modern frameworks.
Hi Folks, Our organization is shutting down operations and I’ve been given my last working day in the first week of September. I’m currently looking for QA opportunities. I have 8+ years of experience in both manual and automation testing, with strong expertise in: • Python + Robot Framework • Mobile testing • Backend and API testing • Database testing Remote opportunities would be highly appreciated. Please let me know if you’re aware of any relevant openings. Thanks in advance! 🙏
I have **3.8 years of experience in QA (mostly Manual)** and feel a bit stuck in my career. Here’s my salary timeline: * **2021 – 2022:** First job – ₹11k/month – 1 yr 4 months – No increment. * **Feb 2023:** Switched to ₹30k/month. * **Dec 2023:** Laid off. * **Jan 2024:** Joined a new company – ₹36k/month. Stayed a year. * **Feb 2024:** Moved again – ₹41k + 10% variable (\~₹44k). * **Aug 2024:** Increment to ₹45k base + variable (\~₹48k/month). Now, my company is offering a **2-year bond** with the following: * ₹52k base + 15% variable (\~₹59.8k/month). * ₹7k/month invested in mutual funds (paid after 2 years). * Must stay until June 2027. Concerns: * I travel **60 km daily** to the office (huge time drain). * Most of my experience is **manual testing**, though I have API & Performance Testing certifications and am learning automation. * I want to move toward **Automation/SDET roles** with better growth (₹10–12 LPA by 2027). **Should I sign the bond for financial security or skip it and focus on upskilling + switching to a better automation role in 2025?** Would love advice from those who’ve been in a similar spot.
Hi! I was shortlisted at a startup software company as a QA Tester. Considering it's a startup, there's only one QA vacancy, and they expect the hire to do both manual and automation testing. During my internship, we didn't dig deep into manual testing practices and focused on automation testing instead. How do you start your test plan management? What's included in your test documentation? Can you share some of your strategies for doing manual testing?
I worked on Fiverr as a QA tester and did game testing, including helping a team build an in‑game moderation system. The market's been rough, no orders for around 6–7 months and savings are running low. Most listings I'm seeing now expect automation plus some AI-assisted testing knowledge. Looking for a clear, realistic path to get job-ready fast. If you've recently landed a first automation role or transitioned from manual, what actually moved the needle for you, specific projects, certs, or contributions? Links to similar posts or checklists appreciated. Thanks!
Has anyone used an AI tool that auto-generates Selenium scripts from plain English test cases? I’m curious how accurate these are for medium complexity test cases (like 50+ steps).