// Copyright 2019 The Gitea Authors.
// All rights reserved.
// SPDX-License-Identifier: MIT

package pull

import (
	"context"
	"errors"
	"fmt"
	"strconv"
	"strings"

	"code.gitea.io/gitea/models"
	"code.gitea.io/gitea/models/db"
	git_model "code.gitea.io/gitea/models/git"
	issues_model "code.gitea.io/gitea/models/issues"
	access_model "code.gitea.io/gitea/models/perm/access"
	repo_model "code.gitea.io/gitea/models/repo"
	"code.gitea.io/gitea/models/unit"
	user_model "code.gitea.io/gitea/models/user"
	"code.gitea.io/gitea/modules/git"
	"code.gitea.io/gitea/modules/gitrepo"
	"code.gitea.io/gitea/modules/graceful"
	"code.gitea.io/gitea/modules/log"
	"code.gitea.io/gitea/modules/process"
	"code.gitea.io/gitea/modules/queue"
	"code.gitea.io/gitea/modules/timeutil"
	asymkey_service "code.gitea.io/gitea/services/asymkey"
	notify_service "code.gitea.io/gitea/services/notify"
)

// prPatchCheckerQueue is the queue that schedules pull request patch-check tests
var prPatchCheckerQueue *queue.WorkerPoolQueue[string]

var (
	ErrIsClosed              = errors.New("pull is closed")
	ErrUserNotAllowedToMerge = models.ErrDisallowedToMerge{}
	ErrHasMerged             = errors.New("has already been merged")
	ErrIsWorkInProgress      = errors.New("work in progress PRs cannot be merged")
	ErrIsChecking            = errors.New("cannot merge while conflict checking is in progress")
	ErrNotMergeableState     = errors.New("not in mergeable state")
	ErrDependenciesLeft      = errors.New("is blocked by an open dependency")
)

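// These sentinel errors are returned by CheckPullMergeable below so that
// callers can tell apart why a merge is currently not possible (for example,
// by comparing the returned error against ErrHasMerged).
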
// AddToTaskQueue adds the given pull request to the test task queue.
func AddToTaskQueue(ctx context.Context, pr *issues_model.PullRequest) {
	pr.Status = issues_model.PullRequestStatusChecking
	err := pr.UpdateColsIfNotMerged(ctx, "status")
	if err != nil {
		log.Error("AddToTaskQueue(%-v).UpdateCols.(add to queue): %v", pr, err)
		return
	}
	log.Trace("Adding %-v to the test pull requests queue", pr)
	err = prPatchCheckerQueue.Push(strconv.FormatInt(pr.ID, 10))
	if err != nil && err != queue.ErrAlreadyInQueue {
		log.Error("Error adding %-v to the test pull requests queue: %v", pr, err)
	}
}

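// MergeCheckType tells CheckPullMergeable which merge flow is being validated,
// so it knows which checks may be skipped.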
type MergeCheckType int

const (
	MergeCheckTypeGeneral  MergeCheckType = iota // general merge checks for "merge", "rebase", "squash", etc
	MergeCheckTypeManually                       // Manually Merged button (mark a PR as merged manually)
	MergeCheckTypeAuto                           // Auto Merge (Scheduled Merge) After Checks Succeed
)

// CheckPullMergeable checks if the pull request is mergeable based on all conditions (branch protection, merge options, ...)
func CheckPullMergeable(stdCtx context.Context, doer *user_model.User, perm *access_model.Permission, pr *issues_model.PullRequest, mergeCheckType MergeCheckType, adminSkipProtectionCheck bool) error {
	return db.WithTx(stdCtx, func(ctx context.Context) error {
		if pr.HasMerged {
			return ErrHasMerged
		}

		if err := pr.LoadIssue(ctx); err != nil {
			log.Error("Unable to load issue[%d] for %-v: %v", pr.IssueID, pr, err)
			return err
		} else if pr.Issue.IsClosed {
			return ErrIsClosed
		}

		if allowedMerge, err := IsUserAllowedToMerge(ctx, pr, *perm, doer); err != nil {
			log.Error("Error whilst checking if %-v is allowed to merge %-v: %v", doer, pr, err)
			return err
		} else if !allowedMerge {
			return ErrUserNotAllowedToMerge
		}

		if mergeCheckType == MergeCheckTypeManually {
			// if the doer is doing a "manual merge" (marking the PR as merged manually), skip the remaining checks
			return nil
		}

		if pr.IsWorkInProgress(ctx) {
			return ErrIsWorkInProgress
		}

		if !pr.CanAutoMerge() && !pr.IsEmpty() {
			return ErrNotMergeableState
		}

		if pr.IsChecking() {
			return ErrIsChecking
		}

		if pb, err := CheckPullBranchProtections(ctx, pr, false); err != nil {
			if !models.IsErrDisallowedToMerge(err) {
				log.Error("Error whilst checking pull branch protection for %-v: %v", pr, err)
				return err
			}

			// The branch protection check failed; check whether the failure can be skipped (skip by setting err = nil)

			// * when doing Auto Merge (Scheduled Merge After Checks Succeed), skip the branch protection check
			if mergeCheckType == MergeCheckTypeAuto {
				err = nil
			}

			// * if the doer is an admin, they may skip the branch protection check,
			//   if that's allowed by the protected branch rule.
			if adminSkipProtectionCheck {
				if doer.IsAdmin {
					err = nil // instance admin can skip the check, so clear the error
				} else if !pb.ApplyToAdmins {
					if isRepoAdmin, errCheckAdmin := access_model.IsUserRepoAdmin(ctx, pr.BaseRepo, doer); errCheckAdmin != nil {
						log.Error("Unable to check if %-v is a repo admin in %-v: %v", doer, pr.BaseRepo, errCheckAdmin)
						return errCheckAdmin
					} else if isRepoAdmin {
						err = nil // repo admin can skip the check, so clear the error
					}
				}
			}

			// If there is still a branch protection check error, return it
			if err != nil {
				return err
			}
		}

		if _, err := isSignedIfRequired(ctx, pr, doer); err != nil {
			return err
		}

		if noDeps, err := issues_model.IssueNoDependenciesLeft(ctx, pr.Issue); err != nil {
			return err
		} else if !noDeps {
			return ErrDependenciesLeft
		}

		return nil
	})
}

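// A minimal usage sketch (illustrative only; the variable names are assumed,
// not taken from this package):
//
//	if err := CheckPullMergeable(ctx, doer, &perm, pr, MergeCheckTypeGeneral, false); err != nil {
//		// err is one of the sentinel errors declared above, or a wrapped
//		// lower-level error from an individual check.
//	}
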
// isSignedIfRequired checks if the merge will be signed if required
func isSignedIfRequired(ctx context.Context, pr *issues_model.PullRequest, doer *user_model.User) (bool, error) {
	pb, err := git_model.GetFirstMatchProtectedBranchRule(ctx, pr.BaseRepoID, pr.BaseBranch)
	if err != nil {
		return false, err
	}

	if pb == nil || !pb.RequireSignedCommits {
		return true, nil
	}

	sign, _, _, err := asymkey_service.SignMerge(ctx, pr, doer, pr.BaseRepo.RepoPath(), pr.BaseBranch, pr.GetGitRefName())

	return sign, err
}

// checkAndUpdateStatus checks if the pull request can leave the checking status,
// and sets it to either conflicted or mergeable.
func checkAndUpdateStatus(ctx context.Context, pr *issues_model.PullRequest) {
	// If the status has not been changed to conflict by testPatch then we are mergeable
	if pr.Status == issues_model.PullRequestStatusChecking {
		pr.Status = issues_model.PullRequestStatusMergeable
	}

	// Make sure there is no waiting test to process before leaving the checking status.
	has, err := prPatchCheckerQueue.Has(strconv.FormatInt(pr.ID, 10))
	if err != nil {
		log.Error("Unable to check if the queue is waiting to reprocess %-v. Error: %v", pr, err)
	}

	if has {
		log.Trace("Not updating status for %-v as it is due to be rechecked", pr)
		return
	}

	if err := pr.UpdateColsIfNotMerged(ctx, "merge_base", "status", "conflicted_files", "changed_protected_files"); err != nil {
		log.Error("Update[%-v]: %v", pr, err)
	}
}

// getMergeCommit checks if a pull request has been merged
// Returns the git.Commit of the pull request if merged
func getMergeCommit(ctx context.Context, pr *issues_model.PullRequest) (*git.Commit, error) {
	if err := pr.LoadBaseRepo(ctx); err != nil {
		return nil, fmt.Errorf("unable to load base repo for %s: %w", pr, err)
	}

	prHeadRef := pr.GetGitRefName()

	// Check if the pull request is merged into BaseBranch
	if _, _, err := git.NewCommand(ctx, "merge-base", "--is-ancestor").
		AddDynamicArguments(prHeadRef, pr.BaseBranch).
		RunStdString(&git.RunOpts{Dir: pr.BaseRepo.RepoPath()}); err != nil {
		if strings.Contains(err.Error(), "exit status 1") {
			// prHeadRef is not an ancestor of the base branch
			return nil, nil
		}
		// Errors are signaled by a non-zero status that is not 1
		return nil, fmt.Errorf("%-v git merge-base --is-ancestor: %w", pr, err)
	}

	// If merge-base successfully exits then prHeadRef is an ancestor of pr.BaseBranch

	// Find the head commit id
	prHeadCommitID, err := git.GetFullCommitID(ctx, pr.BaseRepo.RepoPath(), prHeadRef)
	if err != nil {
		return nil, fmt.Errorf("GetFullCommitID(%s) in %s: %w", prHeadRef, pr.BaseRepo.FullName(), err)
	}

	gitRepo, err := gitrepo.OpenRepository(ctx, pr.BaseRepo)
	if err != nil {
		return nil, fmt.Errorf("%-v OpenRepository: %w", pr.BaseRepo, err)
	}
	defer gitRepo.Close()

	objectFormat := git.ObjectFormatFromName(pr.BaseRepo.ObjectFormatName)

	// Get the commit from BaseBranch where the pull request got merged
	mergeCommit, _, err := git.NewCommand(ctx, "rev-list", "--ancestry-path", "--merges", "--reverse").
		AddDynamicArguments(prHeadCommitID + ".." + pr.BaseBranch).
		RunStdString(&git.RunOpts{Dir: pr.BaseRepo.RepoPath()})
	if err != nil {
		return nil, fmt.Errorf("git rev-list --ancestry-path --merges --reverse: %w", err)
	} else if len(mergeCommit) < objectFormat.FullLength() {
		// PR was maybe fast-forwarded, so just use last commit of PR
		mergeCommit = prHeadCommitID
	}
	mergeCommit = strings.TrimSpace(mergeCommit)

	commit, err := gitRepo.GetCommit(mergeCommit)
	if err != nil {
		return nil, fmt.Errorf("GetMergeCommit[%s]: %w", mergeCommit, err)
	}

	return commit, nil
}

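// The detection above is roughly equivalent to running, inside the base
// repository:
//
//	git merge-base --is-ancestor <prHeadRef> <baseBranch>
//	git rev-list --ancestry-path --merges --reverse <prHeadCommitID>..<baseBranch>
//
// The first command asks whether the PR head is already contained in the base
// branch; the second looks for the merge commit that brought it in.
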
// manuallyMerged checks if a pull request got manually merged.
// When a pull request got manually merged, mark the pull request as merged.
func manuallyMerged(ctx context.Context, pr *issues_model.PullRequest) bool {
	if err := pr.LoadBaseRepo(ctx); err != nil {
		log.Error("%-v LoadBaseRepo: %v", pr, err)
		return false
	}

	if unit, err := pr.BaseRepo.GetUnit(ctx, unit.TypePullRequests); err == nil {
		config := unit.PullRequestsConfig()
		if !config.AutodetectManualMerge {
			return false
		}
	} else {
		log.Error("%-v BaseRepo.GetUnit(unit.TypePullRequests): %v", pr, err)
		return false
	}

	commit, err := getMergeCommit(ctx, pr)
	if err != nil {
		log.Error("%-v getMergeCommit: %v", pr, err)
		return false
	}

	if commit == nil {
		// no merge commit found
		return false
	}

	pr.MergedCommitID = commit.ID.String()
	pr.MergedUnix = timeutil.TimeStamp(commit.Author.When.Unix())
	pr.Status = issues_model.PullRequestStatusManuallyMerged
	merger, _ := user_model.GetUserByEmail(ctx, commit.Author.Email)

	// When the commit author is unknown set the BaseRepo owner as merger
	if merger == nil {
		if pr.BaseRepo.Owner == nil {
			if err = pr.BaseRepo.LoadOwner(ctx); err != nil {
				log.Error("%-v BaseRepo.LoadOwner: %v", pr, err)
				return false
			}
		}
		merger = pr.BaseRepo.Owner
	}
	pr.Merger = merger
	pr.MergerID = merger.ID

	if merged, err := pr.SetMerged(ctx); err != nil {
		log.Error("%-v SetMerged: %v", pr, err)
		return false
	} else if !merged {
		return false
	}

	notify_service.MergePullRequest(ctx, merger, pr)

	log.Info("manuallyMerged[%-v]: Marked as manually merged into %s/%s by commit id: %s", pr, pr.BaseRepo.Name, pr.BaseBranch, commit.ID.String())
	return true
}

// InitializePullRequests checks and tests untested patches of pull requests.
func InitializePullRequests(ctx context.Context) {
	prs, err := issues_model.GetPullRequestIDsByCheckStatus(ctx, issues_model.PullRequestStatusChecking)
	if err != nil {
		log.Error("Find Checking PRs: %v", err)
		return
	}
	for _, prID := range prs {
		select {
		case <-ctx.Done():
			return
		default:
			log.Trace("Adding PR[%d] to the pull requests patch checking queue", prID)
			if err := prPatchCheckerQueue.Push(strconv.FormatInt(prID, 10)); err != nil {
				log.Error("Error adding PR[%d] to the pull requests patch checking queue %v", prID, err)
			}
		}
	}
}

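// The start-up pass above re-queues pull requests that were still in the
// "checking" state when the previous run stopped, so their status does not
// stay stuck after a restart (Init below runs it on service start).
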
// handler handles the passed PR IDs and tests the PRs
func handler(items ...string) []string {
	for _, s := range items {
		id, _ := strconv.ParseInt(s, 10, 64)
		testPR(id)
	}
	return nil
}

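// testPR runs the patch/conflict test for a single pull request, identified by
// its ID, and updates the pull request status accordingly.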
func testPR(id int64) {
	pullWorkingPool.CheckIn(fmt.Sprint(id))
	defer pullWorkingPool.CheckOut(fmt.Sprint(id))
	ctx, _, finished := process.GetManager().AddContext(graceful.GetManager().HammerContext(), fmt.Sprintf("Test PR[%d] from patch checking queue", id))
	defer finished()

	pr, err := issues_model.GetPullRequestByID(ctx, id)
	if err != nil {
		log.Error("Unable to GetPullRequestByID[%d] for testPR: %v", id, err)
		return
	}

	log.Trace("Testing %-v", pr)
	defer func() {
		log.Trace("Done testing %-v (status: %s)", pr, pr.Status)
	}()

	if pr.HasMerged {
		log.Trace("%-v is already merged (status: %s, merge commit: %s)", pr, pr.Status, pr.MergedCommitID)
		return
	}

	if manuallyMerged(ctx, pr) {
		log.Trace("%-v is manually merged (status: %s, merge commit: %s)", pr, pr.Status, pr.MergedCommitID)
		return
	}

	if err := TestPatch(pr); err != nil {
		log.Error("testPatch[%-v]: %v", pr, err)
		pr.Status = issues_model.PullRequestStatusError
		if err := pr.UpdateCols(ctx, "status"); err != nil {
			log.Error("update pr [%-v] status to PullRequestStatusError failed: %v", pr, err)
		}
		return
	}
	checkAndUpdateStatus(ctx, pr)
}

// CheckPRsForBaseBranch checks all pulls with the given base branch
func CheckPRsForBaseBranch(ctx context.Context, baseRepo *repo_model.Repository, baseBranchName string) error {
	prs, err := issues_model.GetUnmergedPullRequestsByBaseInfo(ctx, baseRepo.ID, baseBranchName)
	if err != nil {
		return err
	}

	for _, pr := range prs {
		AddToTaskQueue(ctx, pr)
	}

	return nil
}

// Init runs the task queue to test all the checking status pull requests
func Init() error {
	prPatchCheckerQueue = queue.CreateUniqueQueue(graceful.GetManager().ShutdownContext(), "pr_patch_checker", handler)

	if prPatchCheckerQueue == nil {
		return fmt.Errorf("unable to create pr_patch_checker queue")
	}

	go graceful.GetManager().RunWithCancel(prPatchCheckerQueue)
	go graceful.GetManager().RunWithShutdownContext(InitializePullRequests)
	return nil
}