Ensure UnitTest coverage at least 85% #16
Depends on: #47 Implement unit test (plus 2 dependencies not visible without permission)
Reference: clevermicro/amq-adapter-python#16
No description provided.
Ref epic: CleverSwarm Onboarding #58
Ensure proper unit test coverage; current test coverage is 42%.
@stanislav.hejny @aleenaumair
This ticket is also lacking a priority (which needs to be set to Backlog) as well as a milestone. Make sure this is fixed before moving forward. I MoSCoWed it, though generally that would wait until the ticket is complete.
Hi,
Just want to reference a previous conversation around this, for documentation: we discussed how setting Priority: Backlog overrides Priority: High/Medium/Low. So, until 'Backlog' is moved to a 'Status' label, we will assign high/medium/low priority and keep the state 'unverified'/'verified' to mark tickets that are not yet 'In Progress', meaning they are still part of the backlog.
Please let us know as soon as the Backlog label is moved to the status queue, so that we can start adding it to tickets. @freemo
In addition, the 'Type' label also needs editing, to differentiate type (task, epic, legendary) from category (bug, feature, config, documentation, support, discussion).
cc: @eugen.thaci
Currently, Stan and I are working on the unit tests. The estimate is 8 points, but it is actually taking much longer than that. In theory, writing unit tests should be easy since all the features are implemented and all you need to do is write code verifying their functionality. But in reality, bugs may be found and refactoring may be required to make the code testable.
In today's case, I'm writing unit tests for amqp/adapter/pydantic_serializer.py. As suggested by Aleena, I downloaded the Cursor AI editor and started generating unit tests (currently on a free 7-day trial). Generating unit tests is fast, and reviewing the generated code is not hard; everything looked good to me. At that point, including the download, it had taken less than 5 minutes. The rest of the day went into fixing the code so the unit tests could pass.

I ran the unit tests, they failed, and I started fixing. The first problem is related to how Python locates classes. For the pydantic serializer, we need to know the class before we deserialize. The code records the class name before the actual data so we know which class to use when deserializing. However, in Python, a class name alone is not enough to locate a class; you also need the module path. This was relatively easy to fix, since the LLM gave me the correct code to do it.
I reran the tests, and now deserialization was failing due to broken data. I added several print statements to see what data was being fed to the method and noticed an issue with serialization. The serializer uses commas to separate key-value pairs, which is generally fine until a value can itself contain commas, for example a dict or a list. When deserializing, the input string is split on commas, which breaks dicts and lists and causes an error. I fixed it by switching to JSON.
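A small illustration of the failure mode and the JSON fix (the data and the naive format here are invented for demonstration):

```python
import json

data = {"name": "queue-1", "tags": ["a", "b"], "opts": {"x": 1, "y": 2}}

# Naive format: "key=value" pairs joined with commas. Splitting the
# serialized string on "," also cuts through the list and nested dict,
# producing more fragments than there are keys.
naive = ",".join(f"{k}={v}" for k, v in data.items())
broken_pairs = naive.split(",")
assert len(broken_pairs) > len(data)

# JSON keeps nested structures intact and round-trips cleanly.
restored = json.loads(json.dumps(data))
assert restored == data
```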
I reran the tests again and got a new error. This time the error was triggered by the test code, saying a field should be an object but after deserialization it had become a dict. In the serializer, I saw code that handles nested models specifically, but it wasn't running. After some debugging, I found the issue was caused by obj.model_dump(). This pydantic method exports the model but turns everything into dicts. In the process, nested models are also converted to dicts, so the code could no longer tell whether a value was a nested model. By not using the converted value, we can pull the original value via the field name and continue the process. However, on the deserializer side, the input string says nothing about the type, so we need to look up the field's type on the class. This is where the LLM didn't help: Python 3.9 has an old way to do that and Python 3.10 has a new way, and pydantic v1 and v2 also expose different APIs. The LLM's output failed to keep up to date, and I spent a lot of time figuring out how Python's type system works, how pydantic builds on it, and, the most time-consuming part, how to handle union types like Foo | None.

At the end of the day, everything is fixed: coverage for that file is 89%, and the overall coverage is up by 3%. But the process took me a whole day, largely due to my unfamiliarity with Python and the related libraries. I expect this to improve as I keep working on the codebase and gain more experience with Python, the codebase, and the libraries involved.
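A rough sketch of that type lookup using the standard typing helpers, with plain dataclasses standing in for the actual pydantic models (all class and function names here are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional, Union, get_args, get_origin, get_type_hints


@dataclass
class Payload:
    x: int


@dataclass
class Message:
    body: str
    payload: Optional[Payload] = None  # i.e. Payload | None


def field_type(cls: type, field: str) -> type:
    """Look up a field's annotation and unwrap Optional[X] unions so the
    deserializer knows which concrete class to rebuild."""
    hint = get_type_hints(cls)[field]
    if get_origin(hint) is Union:  # Optional[X] is Union[X, None]
        args = [a for a in get_args(hint) if a is not type(None)]
        if len(args) == 1:
            return args[0]
    return hint


assert field_type(Message, "body") is str
assert field_type(Message, "payload") is Payload
```

Note that the `X | None` syntax written directly on Python 3.10+ produces a `types.UnionType` rather than `typing.Union`, which is exactly the kind of version split described above.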