Convert Doctor IDs To Dashless UUIDs In APIs: A Comprehensive Guide
Hey guys! Let's dive into a crucial task: converting Doctor IDs to dashless UUIDs in our APIs. This guide will walk you through the process, ensuring our API is clean, efficient, and adheres to best practices. We'll cover everything from setup to testing and even commit message guidelines. So, buckle up and let's get started!
🚀 Setup Prerequisites
Before we jump into the code, we need to ensure our development environment is correctly set up. This is super important, so pay close attention!
⚠️ CRITICAL: pnpm is the Key!
Listen up, folks! This project is specifically designed to work with pnpm as the package manager. If you try using npm or yarn, you're gonna run into issues. Trust me, save yourself the headache and stick with pnpm. Think of pnpm as your trusty sidekick in this adventure.
1. Install pnpm Globally (if not already installed)
First things first, we need to make sure pnpm is installed globally on your system. If you haven't already got it, here's the command to get it done:
npm install -g pnpm
This command tells npm (Node Package Manager) to install pnpm globally, which means you can use it from any project on your system. It's like giving pnpm a global passport to all your projects.
2. Install Project Dependencies
Next up, we need to install all the project's dependencies. These are the libraries and tools our project needs to run smoothly. Navigate to the project directory in your terminal and run:
pnpm install
This command tells pnpm to look at the `package.json` file in your project, which lists all the dependencies, and then download and install them. It’s like stocking up your toolbox with all the right equipment for the job.
3. Verify Setup by Running Tests
Okay, we're almost there! To make sure everything is set up correctly, let's run some tests. This will give us peace of mind that we're starting on the right foot. We have different tests for different parts of the project, so let’s run them all.
# For API components
pnpm nx test api
# For PWA components
pnpm nx test web
# For library components
pnpm nx test domain
pnpm nx test application-api
pnpm nx test application-shared
pnpm nx test application-web
pnpm nx test utils-core
Each of these commands tells pnpm to run the test suite for a specific part of our project. We've got tests for the API, the Progressive Web App (PWA), and various library components. If all tests pass, you'll see a green light, and that's what we want!
✅ You're ready to work on this issue once these commands run successfully!
If all those commands run without a hitch, then congratulations! You've successfully set up your development environment, and you're ready to tackle this issue. High five!
Comprehensive Plan Description
Alright, let's get down to the nitty-gritty of what we need to do. Our main task is to convert Doctor IDs in the API to dashless UUIDs, like this: `1d2773330e144f61bfa47ff557d372be`. Think of it as giving our Doctor IDs a sleek, new, and standardized makeover. This is all about making our system more consistent and easier to work with.
The primary focus here is the `apps/api` section of our project. We need to go through the Doctor IDs and ensure they are in the dashless UUID format. This means removing any dashes that might be present in the UUIDs. But we're not stopping there! We need to be thorough. It’s like spring cleaning for our API.
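To make the conversion concrete, here's a minimal sketch of a normalization helper. The function name, location, and validation rules are illustrative, not the project's actual code:

```typescript
// Accepts a UUID in dashed or already-dashless form; the dashes in the
// pattern are optional so both spellings validate.
const UUID_PATTERN =
  /^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$/i;

// Convert a UUID to its dashless, lowercase canonical form.
function toDashlessUuid(id: string): string {
  const trimmed = id.trim();
  if (!UUID_PATTERN.test(trimmed)) {
    throw new Error(`Invalid UUID: ${id}`);
  }
  return trimmed.replace(/-/g, '').toLowerCase();
}
```

Calling `toDashlessUuid('1d277333-0e14-4f61-bfa4-7ff557d372be')` yields `'1d2773330e144f61bfa47ff557d372be'`; an already-dashless input passes through unchanged, so the helper is safe to apply at every API boundary.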
Key Task: Dashless UUIDs Everywhere
Our mission, should we choose to accept it (and we do!), is to verify that all endpoints in the API are using dashless UUIDs. If we find any endpoints still clinging to the old dashed format, we need to update them. This is crucial for consistency and will help prevent headaches down the road. Consistency is key, guys! We want our API to speak one language, and that language is dashless UUIDs.
Why Dashless UUIDs?
You might be wondering, “Why are we doing this?” Well, dashless UUIDs are more compact and often easier to work with in various systems and databases. Plus, they look cleaner! It's all about streamlining our data and making it more efficient. Think of it as decluttering – a tidy API is a happy API.
This isn't just about changing the format of the IDs; it's about ensuring our entire API ecosystem is aligned and optimized. We're talking about a holistic approach to API cleanliness. We’re not just slapping a coat of paint on the wall; we’re renovating the whole room.
Acceptance Criteria
To make sure we're all on the same page, let's define the acceptance criteria for this task. These are the checkpoints we need to hit to consider this job well done. It’s like our project to-do list, and we love checking things off!
Implementation
- [ ] All features described in the plan are implemented: This is the big one. We need to make sure we've covered everything we set out to do, from converting Doctor IDs to dashless UUIDs to updating all relevant endpoints. We're talking about a complete execution of the plan, leaving no stone unturned. It’s like completing all the levels in a game – we want that “Mission Complete” screen! If a feature is part of the plan, it needs to be implemented, tested, and working perfectly.
- [ ] Code follows existing patterns and best practices: We're not just writing code; we're crafting it. It needs to fit in with the existing codebase and adhere to our established best practices. Think of it as writing a new chapter in a book – it needs to match the style and tone of the previous chapters. We're aiming for code that's not only functional but also maintainable and easy to understand. This involves sticking to conventions, using appropriate design patterns, and ensuring our code is consistent with the rest of the project. We're building on a foundation, not tearing it down.
- [ ] All functionality works as specified: No surprises here! Everything needs to work exactly as it's supposed to. This means thorough testing and validation to ensure our changes haven't introduced any unexpected behavior. This is crucial for maintaining the integrity of our system. We need to verify that each component, function, and endpoint performs its intended task flawlessly. It’s like conducting a symphony – every instrument needs to play its part in harmony. If there's a specification, our implementation needs to meet it. No ifs, ands, or buts.
- [ ] Integration with existing codebase is seamless: Our changes shouldn't cause any ripples in the existing codebase. They need to integrate smoothly, like a puzzle piece fitting perfectly into place. We're not looking to create a Frankenstein's monster of mismatched code. It’s about ensuring our additions enhance the system without disrupting its current functionality. This requires careful planning, testing, and collaboration to ensure compatibility and prevent conflicts. We want our changes to be a harmonious addition, not a disruptive force.
Code Quality
- [ ] Code is clean, readable, and well-documented: Think of our code as a well-organized library. It should be easy to navigate, understand, and maintain. This means clear naming conventions, logical structure, and plenty of comments to explain what's going on. We want our code to tell a story, not be a cryptic puzzle. It’s not just about writing code that works; it's about writing code that others can understand and build upon. This promotes collaboration, reduces maintenance costs, and ensures the longevity of our project.
- [ ] TypeScript types are properly defined: TypeScript is our safety net, and proper type definitions are the mesh that keeps us from falling. We need to make sure all our types are accurately defined to prevent runtime errors and make our code more robust. It's like having a detailed blueprint for our code – it helps us catch mistakes early and ensures everything fits together correctly. We want to leverage the full power of TypeScript to create a type-safe and reliable application. This includes using interfaces, generics, and other advanced features to define our data structures and ensure type consistency throughout our codebase.
- [ ] Error handling is comprehensive: Errors are inevitable, but how we handle them is what matters. We need to have a robust error-handling strategy that gracefully deals with unexpected situations. This means catching errors, logging them, and providing meaningful feedback to the user or system. Think of it as having a well-trained emergency response team – we want to be prepared for anything. We need to anticipate potential failure points and implement mechanisms to handle them gracefully. This includes using try-catch blocks, custom error classes, and other techniques to ensure our application remains stable and resilient.
- [ ] Performance considerations are addressed: We want our code to be not just good-looking but also performant. We need to think about efficiency, optimization, and avoiding bottlenecks. This might involve using efficient algorithms, caching data, or optimizing database queries. It’s like tuning a race car – we want it to run smoothly and quickly. Performance is a crucial aspect of user experience, and we need to ensure our application is responsive and efficient. This requires careful analysis, optimization, and testing to identify and address potential performance issues. We're not just building a car; we're building a Ferrari.
Testing
- [ ] Unit tests cover all new functionality: Unit tests are our first line of defense. They ensure that individual components of our code work as expected. We need to write comprehensive unit tests for all new functionality to catch bugs early and prevent regressions. Think of it as testing each Lego brick before building the entire castle – it ensures the foundation is solid. We want to isolate each component and verify its behavior in isolation. This provides a high degree of confidence in the correctness of our code and facilitates refactoring and maintenance.
- [ ] Integration tests verify end-to-end workflows: Integration tests take it a step further. They verify that different parts of our system work together correctly. We need to write integration tests to ensure our changes haven't broken any existing workflows. It's like testing the plumbing and electrical systems after renovating a house – we want to make sure everything works together seamlessly. We're testing the interactions between different components and services to ensure they function as a cohesive whole. This helps us identify issues that may not be apparent from unit tests alone.
- [ ] E2E tests cover user-facing features: End-to-end (E2E) tests simulate real user interactions with our application. They ensure that user-facing features work as expected from start to finish. We need to write E2E tests to verify the overall user experience. Think of it as a dress rehearsal before a play – we want to make sure the entire performance is flawless. We're testing the entire application stack, from the user interface to the database, to ensure a seamless and intuitive user experience. This is the ultimate test of our application's functionality and reliability.
- [ ] All existing tests continue to pass: Our changes shouldn't break existing functionality. We need to make sure all existing tests continue to pass after our changes are merged. This is crucial for maintaining the stability of our system. It's like ensuring the foundation of a building remains strong after adding a new floor – we don't want the whole thing to collapse. We have a responsibility to preserve the integrity of our existing codebase. This requires careful testing, analysis, and collaboration to prevent regressions and ensure our changes are non-disruptive.
Documentation
- [ ] Code is properly commented: Comments are our way of talking to future developers (including our future selves). We need to write clear and concise comments to explain what our code does and why. Think of it as leaving breadcrumbs for others to follow – it makes it easier to understand our code. We want our code to be self-documenting, with comments providing context and rationale for our design decisions. This promotes collaboration, reduces maintenance costs, and ensures the long-term maintainability of our codebase.
- [ ] API documentation is updated: If our changes affect the API, we need to update the API documentation to reflect those changes. This ensures that others can use our API correctly. It's like updating the instruction manual for a new appliance – it helps users understand how to use it. Accurate and up-to-date API documentation is crucial for the success of our project. It enables developers to integrate with our API effectively and efficiently. We need to ensure our documentation is comprehensive, clear, and easy to navigate.
- [ ] README files are updated if needed: The README file is the front door to our project. It should provide a clear overview of the project and instructions on how to get started. If our changes affect the setup or usage of the project, we need to update the README file. Think of it as the welcome mat to our project – it should be inviting and informative. A well-maintained README file is essential for onboarding new developers and ensuring the project is accessible to a wider audience. We need to provide clear instructions, examples, and guidance to help users get up and running quickly.
- [ ] Architecture decisions are documented: If our changes involve significant architectural decisions, we need to document those decisions. This helps others understand the rationale behind our choices and makes it easier to maintain the system in the future. It's like keeping a journal of our design process – it helps us remember why we made certain decisions. Documenting our architectural decisions provides valuable context for future development efforts. It helps us maintain consistency, avoid duplication of effort, and ensure the long-term maintainability of our system. We're building a legacy, not just a project.
Deployment & CI/CD
- [ ] Changes work in all environments: Our code needs to work not just on our local machine but in all environments, including development, staging, and production. We need to test our changes in each environment to ensure they function correctly. Think of it as testing our recipe in different ovens – we want to make sure the cake comes out perfectly every time. Consistency across environments is crucial for a smooth deployment process. We need to identify and address any environment-specific issues early in the development cycle. This requires careful configuration, testing, and monitoring.
- [ ] CI/CD pipeline passes successfully: Our Continuous Integration/Continuous Deployment (CI/CD) pipeline automates the process of building, testing, and deploying our code. We need to make sure our changes don't break the CI/CD pipeline. It's like having a well-oiled machine that takes our code from development to production – we want to keep it running smoothly. A robust CI/CD pipeline is essential for rapid and reliable deployments. We need to ensure our pipeline is configured correctly, runs efficiently, and provides timely feedback on the status of our builds and deployments.
- [ ] No breaking changes introduced: We need to avoid introducing breaking changes that could disrupt existing users or systems. We should strive for backward compatibility whenever possible. Think of it as renovating a building without disrupting the tenants – we want to make improvements without causing inconvenience. Breaking changes can have a significant impact on our users and systems. We need to carefully consider the implications of our changes and strive to maintain backward compatibility whenever possible. This requires thoughtful planning, design, and communication.
- [ ] Database migrations (if any) are tested: If our changes involve database migrations, we need to test those migrations thoroughly to ensure they are applied correctly. This is crucial for maintaining the integrity of our data. It's like performing surgery on a patient – we want to make sure we don't damage anything in the process. Database migrations can be complex and error-prone. We need to test them rigorously to ensure they are applied correctly and do not corrupt our data. This includes testing both the migration process itself and the resulting data schema.
Clean Architecture Compliance
- [ ] Dependencies flow in the correct direction: In Clean Architecture, dependencies should flow from the outer layers to the inner layers. We need to make sure our code adheres to this principle. Think of it as the flow of water in a river – it should flow downhill, not uphill. Proper dependency management is crucial for maintaining the integrity of our architecture. We need to ensure our dependencies are well-defined, managed, and enforced. This prevents circular dependencies, reduces coupling, and promotes modularity.
- [ ] Business logic is separated from infrastructure: Business logic should be independent of infrastructure concerns like databases and external APIs. We need to keep these concerns separate to maintain the flexibility and testability of our code. Think of it as separating the engine from the chassis in a car – it makes it easier to maintain and upgrade each component. Separation of concerns is a fundamental principle of Clean Architecture. We need to isolate our business logic from infrastructure dependencies to ensure it remains portable, testable, and maintainable.
- [ ] Domain layer remains independent: The domain layer contains our core business entities and logic. It should be completely independent of the rest of the application. We need to protect the domain layer from external dependencies. Think of it as the heart of our system – it should be protected from harm. A strong and independent domain layer is essential for a robust and maintainable application. We need to ensure our domain logic is free from external concerns and can evolve independently of the rest of the system.
- [ ] Proper abstraction layers are maintained: Abstraction layers help us decouple different parts of our system. We need to maintain proper abstraction layers to ensure our code is flexible and maintainable. Think of it as using adapters to connect different devices – it allows them to work together seamlessly. Abstraction is a key principle of good software design. We need to use abstraction layers to decouple our components, reduce complexity, and improve the maintainability of our system.
Technical Implementation Guidelines
Let's dive into the technical guidelines for implementing this task. These guidelines will help us maintain a clean, efficient, and well-structured codebase. Think of these as the rules of the road for our coding journey.
Clean Architecture Principles
Clean Architecture is the backbone of our project's structure. It's like the blueprint for a building, ensuring everything is organized and well-connected. Let's revisit the core principles:
1. Dependency Direction:
- Outer layers depend on inner layers only: This is a fundamental rule. Outer layers, like the UI or infrastructure, should depend on inner layers, like the domain logic. Think of it like a pyramid – the base layers don't depend on the top layers. This ensures that changes in outer layers don't ripple through the core of our application.
- Domain layer has no external dependencies: The domain layer is the heart of our application, containing our core business logic. It should be pure and independent, with no dependencies on external frameworks or libraries. This makes it highly testable and reusable.
- Application layer orchestrates domain logic: The application layer acts as a conductor, coordinating the domain logic to fulfill specific use cases. It depends on the domain layer but not on infrastructure concerns.
- Infrastructure implements interfaces from inner layers: The infrastructure layer handles external concerns like databases, APIs, and UI frameworks. It implements interfaces defined in the inner layers, allowing us to swap out infrastructure components without affecting the core logic.
2. Layer Organization:
Think of our application as a series of concentric circles, each with a specific responsibility:
- Domain Core: This is the innermost circle, containing our business entities, value objects, and domain services. It's the heart of our application, representing the core business logic.
- Application Core: This layer contains use cases, application services, and Data Transfer Objects (DTOs). It orchestrates the domain logic to fulfill specific application requirements.
- Infrastructure: This is the outermost layer, handling external concerns like databases, external APIs, and framework-specific code. It implements the interfaces defined in the inner layers.
3. SOLID Principles:
The SOLID principles are a set of guidelines for writing maintainable and scalable code. Think of them as the pillars of good object-oriented design:
- Single Responsibility: Each class should have one, and only one, reason to change. This makes our classes more focused and easier to maintain. It’s like a specialized tool – each tool should have a specific purpose.
- Open/Closed: Software entities should be open for extension but closed for modification. This means we should be able to add new functionality without changing existing code. It’s like building with Lego bricks – we can add new bricks without modifying the existing structure.
- Liskov Substitution: Subtypes must be substitutable for their base types. This ensures that our inheritance hierarchies are well-behaved and that we can use subtypes interchangeably with their base types. It's like having different types of cars – we should be able to drive them all in the same way.
- Interface Segregation: Many specific interfaces are better than one general-purpose interface. This prevents classes from being forced to implement methods they don't need. It’s like having specialized tools for specific tasks – we don't need a Swiss Army knife for every job.
- Dependency Inversion: Depend on abstractions, not concretions. This allows us to decouple our code and make it more testable and maintainable. It's like plugging devices into a power outlet – we depend on the power outlet, not the specific type of device.
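The dependency-inversion idea can be sketched in a few lines of TypeScript. All of the names below are illustrative, not the project's actual interfaces:

```typescript
// Inner layer owns the abstraction; outer layers implement it.
interface DoctorRepository {
  findById(id: string): Promise<{ id: string; name: string } | undefined>;
}

// Infrastructure layer: a concrete implementation (here, in memory).
class InMemoryDoctorRepository implements DoctorRepository {
  constructor(
    private readonly doctors: Map<string, { id: string; name: string }>,
  ) {}

  async findById(id: string) {
    return this.doctors.get(id);
  }
}

// A use case depends on the abstraction, never on the concrete class,
// so the database can be swapped without touching this function.
async function getDoctorName(repo: DoctorRepository, id: string): Promise<string> {
  const doctor = await repo.findById(id);
  if (!doctor) throw new Error(`Doctor not found: ${id}`);
  return doctor.name;
}
```

Because `getDoctorName` only sees the interface, tests can hand it the in-memory repository while production wires in a real database adapter.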
Code Quality Standards
High-quality code is the cornerstone of a successful project. It's like building with solid materials – it ensures our application is robust and reliable. Let's review our code quality standards:
1. TypeScript Usage:
- Use strict mode and proper type definitions: Strict mode helps us catch potential errors at compile time. Proper type definitions ensure our code is type-safe and prevent runtime errors. It’s like having a safety net – it catches us before we fall.
- Avoid `any` type, use specific types or `unknown`: The `any` type defeats the purpose of TypeScript. We should use specific types or `unknown` to ensure type safety. It's like using the right tool for the job – we don't want to use a hammer when we need a screwdriver.
- Define interfaces for all data structures: Interfaces help us define the structure of our data and ensure consistency across our codebase. It's like having a blueprint for our data – it ensures everything fits together correctly.
- Use generic types appropriately: Generic types allow us to write reusable code that can work with different data types. It's like having a versatile tool – it can be used for multiple purposes.
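One way to encode the dashless constraint directly in the type system is a branded type, so a dashed ID can't silently be passed where a dashless one is expected. The names here are illustrative:

```typescript
// A branded string type: structurally still a string, but the compiler
// refuses plain strings where a DashlessUuid is required.
type DashlessUuid = string & { readonly __brand: 'DashlessUuid' };

// Type guard: exactly 32 lowercase hex characters, no dashes.
function isDashlessUuid(value: string): value is DashlessUuid {
  return /^[0-9a-f]{32}$/.test(value);
}

// Narrowing constructor: the only sanctioned way to obtain the brand.
function asDashlessUuid(value: string): DashlessUuid {
  if (!isDashlessUuid(value)) {
    throw new Error(`Not a dashless UUID: ${value}`);
  }
  return value;
}
```

With this in place, endpoint signatures can take `DashlessUuid` instead of `string`, and the compiler enforces that every caller validated the ID first.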
2. Error Handling:
- Use Result/Either patterns for error handling: Result/Either patterns provide a structured way to handle errors and prevent exceptions from crashing our application. It's like having a safety valve – it prevents pressure from building up.
- Provide meaningful error messages: Error messages should be clear, concise, and helpful. They should provide enough information for developers to diagnose and fix the issue. It's like leaving a trail of breadcrumbs – it helps us find our way back.
- Log errors at appropriate levels: We should log errors at different levels (e.g., debug, info, warn, error) to provide context and facilitate debugging. It's like having a security camera system – it records events at different levels of detail.
- Handle edge cases and validation errors: We need to anticipate potential edge cases and handle them gracefully. We should also validate user input to prevent errors and security vulnerabilities. It's like having a safety checklist – it ensures we haven't missed anything.
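The Result pattern mentioned above can be sketched for the Doctor ID case like this. A real codebase might use a library such as neverthrow instead, so treat the shape and names as illustrative:

```typescript
// A minimal Result type: success carries a value, failure carries an error.
type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E };

// Parsing returns a Result instead of throwing, so callers are forced
// by the type system to handle the failure branch.
function parseDoctorId(raw: string): Result<string, string> {
  const normalized = raw.trim().replace(/-/g, '').toLowerCase();
  if (!/^[0-9a-f]{32}$/.test(normalized)) {
    return { ok: false, error: `Invalid Doctor ID: ${raw}` };
  }
  return { ok: true, value: normalized };
}
```

An endpoint can then branch on `result.ok` and map the failure case to a 400 response with the error message, rather than letting an exception bubble up.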
3. Testing Strategy:
- Unit tests for business logic (domain layer): Unit tests verify that individual components of our business logic work as expected. They should be isolated and focused on testing specific functionality. It's like testing each engine component before assembling it – it ensures everything works correctly.
- Integration tests for application services: Integration tests verify that different parts of our application work together correctly. They should test the interactions between components and ensure they function as a cohesive whole. It's like testing the entire engine after assembly – it ensures all the components work together seamlessly.
- E2E tests for complete user workflows: E2E tests simulate real user interactions with our application. They ensure that user-facing features work as expected from start to finish. It's like testing the entire car on a test track – it ensures the whole system works under real-world conditions.
- Mock external dependencies appropriately: Mocking allows us to isolate our code and test it independently of external dependencies. We should use mocking to simulate the behavior of databases, APIs, and other external systems. It's like using a simulator to train pilots – it allows us to practice in a safe environment.
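As a small illustration of mocking, a hand-rolled test double can stand in for an external dependency. The gateway interface and `notifyDoctor` function are hypothetical, not the project's actual code:

```typescript
// The external dependency, expressed as an interface.
interface EmailGateway {
  send(to: string, body: string): Promise<void>;
}

// The unit under test depends only on the interface.
async function notifyDoctor(gateway: EmailGateway, email: string): Promise<void> {
  await gateway.send(email, 'Your ID format has changed to dashless UUIDs.');
}

// A mock that records calls instead of sending real email.
const sent: Array<{ to: string; body: string }> = [];
const mockGateway: EmailGateway = {
  async send(to, body) {
    sent.push({ to, body });
  },
};

notifyDoctor(mockGateway, 'doc@example.com').then(() => {
  // `sent` now captures the interaction without touching real infrastructure.
});
```

Test-runner helpers like `jest.fn()` or `vi.fn()` can replace the hand-rolled recorder; the principle of substituting the abstraction stays the same.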
Performance Considerations
Performance is a critical aspect of user experience. A fast and responsive application is essential for user satisfaction. Let's review our performance considerations:
- Use efficient algorithms and data structures: Choosing the right algorithms and data structures can significantly impact performance. We should select the most efficient options for our specific use cases. It's like choosing the right tool for the job – a hammer is better than a screwdriver for driving nails.
- Implement proper caching strategies: Caching can improve performance by reducing the number of database queries and API calls. We should implement caching strategies at different levels (e.g., client-side, server-side) to optimize performance. It's like having a shortcut – it allows us to get to our destination faster.
- Consider database query optimization: Database queries can be a major bottleneck in our application. We should optimize our queries to reduce execution time and improve performance. It's like tuning an engine – it makes it run more efficiently.
- Handle async operations properly: Asynchronous operations can improve performance by allowing our application to perform multiple tasks concurrently. We should use async/await or Promises to handle async operations properly. It's like multitasking – it allows us to do multiple things at the same time.
- Monitor memory usage and potential leaks: Memory leaks can lead to performance degradation and application crashes. We should monitor memory usage and identify potential leaks. It's like checking the oil level in a car – it prevents engine damage.
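As a small illustration of the async point above, independent operations can be overlapped with `Promise.all` instead of awaited one at a time. The fetch functions are hypothetical stand-ins for real data-access calls:

```typescript
// Hypothetical data-access calls; in practice these would hit a database
// or another service.
async function fetchDoctor(id: string): Promise<string> {
  return `doctor:${id}`;
}
async function fetchSchedule(id: string): Promise<string> {
  return `schedule:${id}`;
}

async function loadDashboard(id: string) {
  // Sequential awaits would add the two latencies together;
  // Promise.all lets them run concurrently.
  const [doctor, schedule] = await Promise.all([
    fetchDoctor(id),
    fetchSchedule(id),
  ]);
  return { doctor, schedule };
}
```

The same pattern applies wherever two awaits don't depend on each other's results; when they do depend on each other, sequential awaits are the correct choice.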
Development Commands Reference
To make our development process smoother and more efficient, let's take a look at some essential commands. These commands are like the keys to our coding kingdom!
Development Commands:
- `pnpm dev` - Start development server: This command fires up our development server, allowing us to see our changes in real-time. It's like turning on the lights in our coding lab.
- `pnpm build` - Build the application: This command builds our application for deployment. It's like packaging our product for shipment.
- `pnpm preview` - Preview the built application: This command lets us preview our built application before deploying it. It's like a sneak peek before the grand opening.
Testing Commands:
- `pnpm test` - Run all tests: This command runs all the tests in our project, ensuring our code is working as expected. It's like a thorough quality check.
- `pnpm test:watch` - Run tests in watch mode: This command runs tests in watch mode, automatically rerunning them whenever we make changes. It's like having a vigilant test assistant.
- `pnpm test:coverage` - Run tests with coverage: This command runs tests with coverage analysis, showing us which parts of our code are not covered by tests. It's like a test coverage map.
- `pnpm domain` - Test domain: This command runs tests specifically for our domain logic. It's like a domain-specific checkup.
- `pnpm application` - Test application-shared: This command runs tests for our application-shared components. It's like testing the shared parts of our application.
- `pnpm utils` - Test utils-core: This command runs tests for our utils-core library. It's like testing our core utility functions.
API Testing:
- `pnpm test:api` - Run API tests: This command runs unit and integration tests for our API endpoints. It's like testing the individual API routes.
- `pnpm endapi` - Run API E2E tests: This command runs end-to-end tests for our API, simulating real user interactions. It's like a full API workout.
- `pnpm e2e:postman` - Run Postman tests: This command runs Postman tests, allowing us to test our API endpoints with a popular API testing tool.
UI Testing:
- `pnpm test:web` - Run PWA tests: This command runs tests for our Progressive Web App (PWA). It's like testing the web app components.
- `pnpm endweb` - Run PWA E2E tests: This command runs end-to-end tests for our PWA, simulating real user interactions. It's like a full PWA test run.
- `pnpm playwright` - Run Playwright tests: This command runs Playwright tests, allowing us to write end-to-end tests for our UI. It’s like using a powerful UI testing framework.
Code Quality:
- `pnpm lint` - Run linting: This command runs linting tools to identify code style issues. It's like a code style checkup.
- `pnpm lint:fix` - Fix linting issues: This command automatically fixes many linting issues. It’s like a code style auto-repair tool.
- `pnpm format` - Format code: This command formats our code according to our code style guidelines. It's like a code beautification service.
- `pnpm typecheck` - Check TypeScript types: This command checks our TypeScript code for type errors. It's like a TypeScript safety net.
Coverage Analysis:
- `pnpm covapi` - API coverage: This command generates coverage reports for our API tests. It's like an API test coverage map.
- `pnpm covweb` - PWA coverage: This command generates coverage reports for our PWA tests. It's like a PWA test coverage map.
- `pnpm covdomain` - Domain coverage: This command generates coverage reports for our domain tests. It's like a domain test coverage map.
- `pnpm covapplication` - Application coverage: This command generates coverage reports for our application tests. It's like an application test coverage map.
- `pnpm covutils` - Utils coverage: This command generates coverage reports for our utils tests. It's like a utils test coverage map.
⚠️ CRITICAL: Commit Message Guidelines
Commit messages are crucial for tracking changes and collaborating effectively. Think of them as the history book of our project. Let's make sure we write them right!
🚨 FAILURE TO FOLLOW THESE RULES WILL CAUSE COMMIT FAILURES! 🚨
Listen up, folks! This is super important. If you don't follow these rules, your commits will be rejected. We're serious about this!
Format: `type(scope): subject`
This is the golden rule. Every commit message must follow this format. It’s like a secret code that unlocks the commit’s meaning.
Example: `feat(api): implement user authentication system`
This is a perfect example of a well-formatted commit message. It tells us the type of change (feat), the scope (api), and the subject (what was implemented).
Available Types:
- `feat` - A new feature (most common for comprehensive plans): This type is used when we're adding a new feature to the application. It's like adding a new room to a house.
- `fix` - A bug fix: This type is used when we're fixing a bug in the code. It’s like patching a hole in the roof.
- `refactor` - Code changes that neither fix bugs nor add features: This type is used when we're refactoring code, improving its structure or readability without changing its functionality. It's like rearranging furniture in a room.
- `perf` - Performance improvements: This type is used when we're making performance improvements to the code. It’s like tuning an engine for better performance.
- `test` - Adding missing tests or correcting existing tests: This type is used when we're adding or fixing tests. It's like testing the safety systems in a car.
- `docs` - Documentation only changes: This type is used when we're making changes to the documentation. It’s like updating the instruction manual.
- `style` - Formatting changes: This type is used when we're making formatting changes to the code. It’s like cleaning up the code to make it look nicer.
- `build` - Build system or dependency changes: This type is used when we're making changes to the build system or dependencies. It’s like upgrading the tools in our workshop.
- `ci` - CI configuration changes: This type is used when we're making changes to the CI configuration. It’s like setting up the automated testing system.
- `chore` - Other changes: This type is used for any other changes that don't fit into the above categories. It’s like miscellaneous tasks around the house.
Scope Rules (REQUIRED):
The scope tells us which part of the project the commit affects. It’s like labeling the different sections of a report.
- Use kebab-case (lowercase with hyphens)
- Examples: `api`, `web`, `domain`, `application-shared`, `utils-core`
- Use `auth`, `api`, `ui`, `database` for feature-specific scopes
Subject Rules (REQUIRED):
The subject is a brief description of what the commit does. It’s like the headline of a news article.
- Start with lowercase letter or number
- No period at the end
- Header length limits vary by scope:
  - `api-e2e`, `web-e2e`, `application-shared`: max 100 characters
  - `domain`: max 95 characters
  - `api`, `web`: max 93 characters
  - `utils-core`: max 90 characters
  - All other scopes: max 82 characters
- Be descriptive and specific about what was implemented
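As a rough illustration of these rules (the authoritative versions live in `commitlint.config.ts`), a simplified checker might look like the sketch below. Note that it deliberately omits the per-scope header length limits:

```typescript
// The allowed commit types from the list above.
const TYPES = [
  'feat', 'fix', 'refactor', 'perf', 'test',
  'docs', 'style', 'build', 'ci', 'chore',
];

// Validate a `type(scope): subject` header: known type, kebab-case scope,
// subject starting with a lowercase letter or digit, no trailing period.
function isValidCommitHeader(header: string): boolean {
  const match = header.match(/^([a-z]+)\(([a-z0-9-]+)\): (.+)$/);
  if (!match) return false;
  const [, type, , subject] = match;
  return (
    TYPES.includes(type) &&
    /^[a-z0-9]/.test(subject) &&
    !subject.endsWith('.')
  );
}
```

So `feat(api): implement user authentication system` passes, while `Feat(api): Implement things.` fails on every rule at once.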
Multi-commit Guidelines:
For large implementations, break into logical commits. This makes it easier to review changes and understand the history of the project. It’s like breaking a large task into smaller, more manageable steps.
For example:
feat(domain): add user entity and value objects
feat(application-shared): implement authentication use cases
feat(api): add authentication endpoints
test(api): add comprehensive auth tests
Reference: See `commitlint.config.ts` and `.husky/commit-msg` for complete rules.
These files contain the complete commit message rules for our project. Think of them as the official rulebook.
⚠️ Your commits will be automatically rejected if they don't follow these rules!
We're not kidding! If your commit messages don't follow these rules, they will be automatically rejected by our CI system. So, pay attention and get it right!
And that's a wrap, folks! You've made it through the comprehensive guide on converting Doctor IDs to dashless UUIDs. Remember, this task is crucial for maintaining the consistency and efficiency of our API. By following these guidelines, you'll not only complete this task successfully but also contribute to the overall quality of our codebase. Keep up the great work, and happy coding!