Intercept is a small middleware that communicates between various services used in the TYPO3 core or core-near world. Intercept can be found online at https://intercept.typo3.com.
Processes, setup, architecture and so on are documented in this README.
Please check the information in the legacy_hook/ folder.
End point "/githubpr" - Hook fired by the GitHub TYPO3 core main mirror when a pull request has been pushed to GitHub, used to transfer that PR to a Forge issue and a Gerrit review.
End point "/split" - Hook fired by the GitHub core main mirror https://github.com/typo3/typo3.cms/ for new pushes (merged patch / new tag), used to update the git split packages at https://github.com/typo3-cms/. Subtree splitting and tagging takes a while, so jobs are queued via RabbitMQ and a single Symfony CLI worker command does the main job.
Sends Bitbucket webhook push events to Packagist to update the corresponding packages, as Packagist currently cannot consume the Bitbucket Server payloads directly. Requests to the hook have to be sent via https://intercept.typo3.com/bitbucketToPackagist?apiToken=token&username=user
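For illustration, a relay call to this hook could be sketched as below; the token, username and payload file name are placeholders, not real credentials:

```shell
# Hypothetical relay: forward a Bitbucket push-event payload to the hook.
# API token, username and payload file name are placeholders.
API_TOKEN="secret-token"
USERNAME="my-user"
URL="https://intercept.typo3.com/bitbucketToPackagist?apiToken=${API_TOKEN}&username=${USERNAME}"
# Print the resulting call instead of sending it; drop the echo to really post.
echo curl -X POST -H "Content-Type: application/json" --data @push-event.json "$URL"
```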
Trigger subtree splitting and tagging manually.
Interface to deal with documentation rendering and management.
Interface to link other services to Discord webhooks.
Intercept is a kind of spider that sits between different services to communicate and translate between them. On the testing side, only the simple data-munging parts are unit tested; the main testing logic lies in the functional tests. Coverage is very high in order to specify in detail what Intercept does and which data is expected from a given service.
Changes to Intercept should go to the develop branch. This branch is deployed to stage at https://stage.intercept.typo3.com/ - the master branch is updated by creating a new release via the gitflow workflow and is deployed to live by GitHub Actions.
Notes: the ddev-based setup does currently NOT start the RabbitMQ server and the core split / tag worker. Some further setup is not fully finished either, such as valid credentials for third-party services. The documented setup has been created to allow easy development of the documentation part - if more is needed, have a look at the .env file and write proper values to a .env.local file!
# Clone repo
ddev start
ddev composer install
ddev exec bin/console doctrine:migrations:migrate -n
ddev exec npm ci
ddev exec npm run build
git pull
ddev start
ddev composer install
ddev exec bin/console cache:clear
ddev exec bin/console doctrine:migrations:migrate -n
ddev exec npm ci
ddev exec npm run build
ddev exec -s rabbitmq rabbitmqadmin -u admin -p foo declare queue name=intercept-core-split-testing
If JS / CSS / web image files change, the files in the public dirs need to be recompiled and published:
ddev exec yarn encore dev
Alternatively, start an explicit watcher process that recompiles whenever CSS files change:
ddev exec yarn encore dev --watch
To use the Discord part of Intercept you need two variables in your .env.local file:
DISCORD_SERVER_ID
DISCORD_BOT_TOKEN
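For illustration, such a .env.local fragment could look like this (both values are placeholders, not working credentials):

```shell
# .env.local - Discord settings (placeholder values, replace with your own)
DISCORD_SERVER_ID=123456789012345678
DISCORD_BOT_TOKEN=your-bot-token-here
```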
The server ID is the ID of the Discord server you wish to interact with. You can find it by turning on developer mode in Discord, right-clicking the server, and then clicking "Copy ID".
The bot token is needed to interact with the Discord API. You can read more about these here. You also need to make sure the bot you are using is a member of the server you are using!
Once set up, you must run the command bin/console app:discord-sync. This command will fetch the list of Discord channels into your Intercept installation.
In a production environment, this command should be set as a cronjob for roughly every 10 minutes.
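A matching crontab entry could look like the sketch below; the installation path and the PHP binary location are assumptions and have to be adjusted to the actual environment:

```shell
# Hypothetical crontab entry: sync Discord channels every 10 minutes.
# Path to the Intercept installation and PHP binary are placeholders.
*/10 * * * * /usr/bin/php /var/www/intercept/bin/console app:discord-sync >/dev/null 2>&1
```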
To use Usercentrics you must set these variables in your .env.local file:
ddev composer t3g:test:php:unit
ddev composer t3g:test:php:functional
Find rendered coverage data at var/phpunit/coverage/index.html
ddev composer t3g:test:php:cover
ddev composer t3g:cgl
If the instance runs with ddev, a .env.local file inside .ddev is mandatory. It can be copied from the existing .env.example file. If you do not run performance profiling tasks, the values may be empty; otherwise, put your account data here. If in doubt, reach out to Susi.
You need git, parallel, docker and SSH access to the docs server (DO NOT DO THIS ON SRV001!). Fetch var/data.db from the live system before you start.

# rm build dir if it exists
rm -Rf var/docs-build-information/
# generate build information - writes build information files to disk at var/docs-build-information/
./bin/console app:docs-dump-render-info --all
# go to the build folder
cd build
# download and update repositories
time find ../var/docs-build-information/ ! -type d | parallel --progress "./download-update.sh {}"
# render all documents (~1h on a modern notebook)
time find ../var/docs-build-information/ ! -type d | parallel --progress "./render.sh {}"
# move docs to final structure locally
time find ../var/docs-build-information/ ! -type d | parallel "./move.sh {}"
# rsync rendered docs to live
./upload.sh
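As a side note, the `! -type d` filter in the find calls above makes sure that only the per-project build-information files, not the directories containing them, are piped into parallel. A toy illustration with made-up paths:

```shell
# Stand-in for var/docs-build-information/ with one hypothetical entry.
mkdir -p /tmp/docs-build-information/manuals
touch /tmp/docs-build-information/manuals/typo3-cms-core.json
# Prints only the file, not the directories:
find /tmp/docs-build-information ! -type d
# → /tmp/docs-build-information/manuals/typo3-cms-core.json
```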