Middleware Technologies
Hello Everyone,

Welcome to my Channel Middleware Technologies!!

This channel is all about the middleware technologies I have been working with throughout my career, along with articles on programming and scripting languages I'm interested in, such as Java, Python, and Bash.

I am mainly involved in managing and maintaining middleware infrastructure consisting of WebSphere Application Server, IHS, Tomcat, and Apache, with part of the infrastructure being migrated to the cloud using OpenStack and Chef.
I am also involved in automating the middleware infrastructure and helping streamline the development and deployment process in the organization using DevOps tools like Maven, Jenkins, Nexus, and Git, along with automation scripting and programming in Java, Python, and Bash.

Thanks
SB
How to use GnuPG to Sign and Encrypt your data
44:14
9 months ago
How to setup and configure a Redis Instance
24:22
9 months ago
How to manage passwords using pass cli utility
25:00
10 months ago
How to setup NFS server using Vagrant and Ansible
37:49
11 months ago
Comments
@PrashantKumar-ub9tu 1 day ago
@MiddlewareTechnologies, I repeated all the steps, but it didn't work for me. Is there an alternate way I could connect with you to find a solution?
@karunabhardwaj4919 5 days ago
Can I package a .bin file as an RPM package? It's an IBM installer that needs packaging.
@WilliamPuntorieri 21 days ago
When you arrive at step 12, it asks you to "Enter LDAP Password", but we haven't set one before, so how can I change or set it?
@MiddlewareTechnologies 20 days ago
Hi @@WilliamPuntorieri, that's the password we configured in step 7, the LMDB database definition (olcRootPW).
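For anyone wondering how that value gets set or changed later, this is roughly the kind of LDIF used to update olcRootPW (the DN and the hash below are generic placeholders from a typical OpenLDAP mdb setup, not the exact values used in the video):

```ldif
# Hypothetical example: replace the admin password on the mdb config database
dn: olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}hash-generated-with-slappasswd
```

It would usually be applied with something like `ldapmodify -Y EXTERNAL -H ldapi:/// -f change-pw.ldif`, with the hash produced beforehand by `slappasswd`.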
@bsudhir6 28 days ago
panic: fail to read configuration, err: While parsing config: yaml: did not find expected node content
@selinoktay8372 1 month ago
Do you have the "code" on GitHub (like config.yml etc., any written source)? The blog you mentioned isn't linked. Thank you.
@MiddlewareTechnologies 1 month ago
Hi @selinoktay8372, sorry, I've linked the blog reference in the description now. Here is the link for your reference: middlewaretechnologies.in/2023/02/how-to-integrate-opensearch-with-ldap-for-authentication-and-authorization.html
@selinoktay8372 1 month ago
@@MiddlewareTechnologies thank you so much!
@prasadzungarepatil4696 2 months ago
How can I send these traces to Jaeger?
@javiglesias 2 months ago
Thanks for the presentation. How about log rotation? I've read the documentation but couldn't find anything about it; I don't think the plugin supports it. Do you know a solution or workaround for this issue? The logs, especially if you are logging request/response bodies, can get big quite fast.
@MiddlewareTechnologies 1 month ago
Hi @javiglesias, agreed that the JSON logs can get quite big over time. I see the following plugin available for log rotation, though I haven't tried it: apisix.apache.org/docs/apisix/plugins/log-rotate/. You may want to apply it and see if that helps.
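For anyone who wants to try that plugin, a rough (untested) sketch of enabling it in APISIX's conf/config.yaml, where the interval and retention values are placeholders to adjust:

```yaml
plugins:
  - log-rotate            # the plugin must be enabled in the plugin list
plugin_attr:
  log-rotate:
    interval: 3600        # rotate every hour (seconds)
    max_kept: 168         # keep at most 168 rotated files
```

As noted in the thread below, this plugin only covers APISIX's own access/error logs, so custom JSON log files may need a different approach.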
@javiglesias 1 month ago
@@MiddlewareTechnologies Thanks for the reply. We've tried the log-rotate plugin, but it only rotates the access and error logs, so one option is to modify the plugin to support additional log files. We didn't want to do that, so instead we ended up installing the logrotate Linux tool with a cron expression to achieve the rotation. After that we ship the logs via Filebeat to an Elasticsearch cluster. So far no problems.
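For anyone following the same approach, a sketch of what such a logrotate rule might look like; the log path here is an assumption, so adjust it to wherever your logger plugin actually writes:

```text
# /etc/logrotate.d/apisix-json (hypothetical path and glob)
/usr/local/apisix/logs/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate avoids having to signal the writing process to reopen its log file, at the cost of possibly losing a few lines during the truncate.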
@chandinims3350 3 months ago
Hi, after switching to the context I'm not able to access the cluster. I'm getting the following error: "error: You must be logged in to the server (Unauthorized)". Can you help? @MiddlewareTechnologies
@MiddlewareTechnologies 2 months ago
Hi @@chandinims3350, that error probably means your kubeconfig contains incorrect authentication token details, or the token might have expired. Please troubleshoot carefully and you should be able to find the root cause.
@sukhvirnetkaur8044 3 months ago
Thank you so much! Your clear explanation helped resolve my issue.
@ameyabhyankar4944 3 months ago
Thank you for the video. Looking forward to the OpenSearch 2.13.0 installation video with Fluent Bit 3.0.x in k8s. Thank you.
@Ssanthya 3 months ago
Thank you. I think the latest version of Dependency-Check asks for an NVD key, unlike the older versions. Is the NVD key paid or free?
@MiddlewareTechnologies 3 months ago
It's free; you can procure a personal API key for personal use. You just need to register with the required details.
@Ssanthya 3 months ago
@@MiddlewareTechnologies I need the key to use in an Azure DevOps pipeline for my org, so can I provide my org name and get the key?
@MiddlewareTechnologies 3 months ago
@@Ssanthya check with your org first on that.
@user-qr4po8pu9i 3 months ago
The Indian English accent made me crazy.
@MiddlewareTechnologies 2 months ago
Hi @user-qr4po8pu9i, sorry for the trouble; I am trying to improve and make better-sounding videos. I know how difficult it can be to adjust to another accent.
@tamilarasanp9285 3 months ago
I configured APISIX, but I can't access it without specifying the port number (9080). How can I access it without the port number?
@user-do4nq3fr6t 4 months ago
Please make a video on how to make a backup copy of the repository and restore it.
@testingutopia 5 months ago
Thanks for putting in the effort to catalog this... So from an external tools perspective, we have the CIS Benchmark, Trivy, Falco... What about seccomp, AppArmor, OPA, etc.? Are they relevant or not for the certification?
@MiddlewareTechnologies 5 months ago
Thanks @testingutopia, yes, they are also important topics for the exam. seccomp and AppArmor fall under the securityContext topic.
@sadiqkavungal990 5 months ago
Hi folks, please set the environment variable OTEL_EXPORTER_OTLP_ENDPOINT: "otel-collector:4317" so that the Python application can communicate with the OTel Collector.
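In docker-compose terms, that suggestion looks roughly like the sketch below. The service names are assumptions, and depending on the OTel SDK you may need an explicit http:// scheme on the endpoint:

```yaml
services:
  python-app:
    environment:
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
  otel-collector:
    image: otel/opentelemetry-collector:latest
    ports:
      - "4317:4317"   # OTLP gRPC
```

Using the Compose service name (otel-collector) works because both containers resolve each other over the Compose network, whereas localhost inside the app container would point at the container itself.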
@vinayrudrapati5554 5 months ago
Hi bro, this video has so much information. Can you also make a video using Prometheus?
@user-tc1st4hr6d 5 months ago
Hi, my setup runs on Ubuntu 22.04. Data Prepper reports the following errors; do you know how to resolve them? Thanks.
2024-02-27T01:10:03,216 [log_pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy - Bulk Operation Failed.
data-prepper | software.amazon.awssdk.core.exception.SdkClientException: Unable to load credentials from any of the providers in the chain AwsCredentialsProviderChain(credentialsProviders=[SystemPropertyCredentialsProvider(), EnvironmentVariableCredentialsProvider(), WebIdentityTokenCredentialsProvider(), ProfileCredentialsProvider(profileName=default, profileFile=ProfileFile(sections=[])), ContainerCredentialsProvider(), InstanceProfileCredentialsProvider()]) : [SystemPropertyCredentialsProvider(): Unable to load credentials from system settings. Access key must be specified either via environment variable (AWS_ACCESS_KEY_ID) or system property (aws.accessKeyId)., EnvironmentVariableCredentialsProvider(): Unable to load credentials from system settings. Access key must be specified either via environment variable (AWS_ACCESS_KEY_ID) or system property (aws.accessKeyId)., WebIdentityTokenCredentialsProvider(): Either the environment variable AWS_WEB_IDENTITY_TOKEN_FILE or the java property aws.webIdentityTokenFile must be set., ProfileCredentialsProvider(profileName=default, profileFile=ProfileFile(sections=[])): Profile file contained no credentials for profile 'default': ProfileFile(sections=[]), ContainerCredentialsProvider(): Failed to load credentials from metadata service., InstanceProfileCredentialsProvider(): Failed to load credentials from IMDS.]
data-prepper | at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:111) ~[sdk-core-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.auth.credentials.AwsCredentialsProviderChain.resolveCredentials(AwsCredentialsProviderChain.java:130) ~[auth-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.auth.credentials.internal.LazyAwsCredentialsProvider.resolveCredentials(LazyAwsCredentialsProvider.java:45) ~[auth-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider.resolveCredentials(DefaultCredentialsProvider.java:128) ~[auth-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.auth.credentials.AwsCredentialsProvider.resolveIdentity(AwsCredentialsProvider.java:54) ~[auth-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.identity.spi.IdentityProvider.resolveIdentity(IdentityProvider.java:60) ~[identity-spi-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.awscore.internal.authcontext.AwsCredentialsAuthorizationStrategy.lambda$resolveCredentials$2(AwsCredentialsAuthorizationStrategy.java:112) ~[aws-core-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.core.internal.util.MetricUtils.measureDuration(MetricUtils.java:56) ~[sdk-core-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.awscore.internal.authcontext.AwsCredentialsAuthorizationStrategy.resolveCredentials(AwsCredentialsAuthorizationStrategy.java:112) ~[aws-core-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.awscore.internal.authcontext.AwsCredentialsAuthorizationStrategy.addCredentialsToExecutionAttributes(AwsCredentialsAuthorizationStrategy.java:85) ~[aws-core-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.awscore.internal.AwsExecutionContextBuilder.invokeInterceptorsAndCreateExecutionContext(AwsExecutionContextBuilder.java:137) ~[aws-core-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.invokeInterceptorsAndCreateExecutionContext(AwsSyncClientHandler.java:67) ~[aws-core-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:76) ~[sdk-core-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:182) ~[sdk-core-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:74) ~[sdk-core-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45) ~[sdk-core-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:53) ~[aws-core-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.services.sts.DefaultStsClient.assumeRole(DefaultStsClient.java:272) ~[sts-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.services.sts.auth.StsAssumeRoleCredentialsProvider.getUpdatedCredentials(StsAssumeRoleCredentialsProvider.java:73) ~[sts-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.services.sts.auth.StsCredentialsProvider.updateSessionCredentials(StsCredentialsProvider.java:92) ~[sts-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.utils.cache.CachedSupplier.lambda$jitteredPrefetchValueSupplier$8(CachedSupplier.java:300) ~[utils-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.utils.cache.CachedSupplier$PrefetchStrategy.fetch(CachedSupplier.java:448) ~[utils-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.utils.cache.CachedSupplier.refreshCache(CachedSupplier.java:208) ~[utils-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.utils.cache.CachedSupplier.get(CachedSupplier.java:135) ~[utils-2.21.23.jar:?]
data-prepper | at software.amazon.awssdk.services.sts.auth.StsCredentialsProvider.resolveCredentials(StsCredentialsProvider.java:105) ~[sts-2.21.23.jar:?]
data-prepper | at org.opensearch.client.transport.aws.AwsSdk2Transport.prepareRequest(AwsSdk2Transport.java:342) ~[opensearch-java-2.8.1.jar:?]
data-prepper | at org.opensearch.client.transport.aws.AwsSdk2Transport.performRequest(AwsSdk2Transport.java:189) ~[opensearch-java-2.8.1.jar:?]
data-prepper | at org.opensearch.client.opensearch.OpenSearchClient.bulk(OpenSearchClient.java:215) ~[opensearch-java-2.8.1.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.bulk.OpenSearchDefaultBulkApiWrapper.bulk(OpenSearchDefaultBulkApiWrapper.java:18) ~[opensearch-2.6.1.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.lambda$doInitializeInternal$3(OpenSearchSink.java:251) ~[opensearch-2.6.1.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.handleRetry(BulkRetryStrategy.java:285) ~[opensearch-2.6.1.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.BulkRetryStrategy.execute(BulkRetryStrategy.java:195) ~[opensearch-2.6.1.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.lambda$flushBatch$12(OpenSearchSink.java:487) ~[opensearch-2.6.1.jar:?]
data-prepper | at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.11.3.jar:1.11.3]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.flushBatch(OpenSearchSink.java:484) ~[opensearch-2.6.1.jar:?]
data-prepper | at org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink.doOutput(OpenSearchSink.java:453) ~[opensearch-2.6.1.jar:?]
data-prepper | at org.opensearch.dataprepper.model.sink.AbstractSink.lambda$output$0(AbstractSink.java:67) ~[data-prepper-api-2.6.1.jar:?]
data-prepper | at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141) ~[micrometer-core-1.11.3.jar:1.11.3]
data-prepper | at org.opensearch.dataprepper.model.sink.AbstractSink.output(AbstractSink.java:67) ~[data-prepper-api-2.6.1.jar:?]
data-prepper | at org.opensearch.dataprepper.pipeline.Pipeline.lambda$publishToSinks$5(Pipeline.java:349) ~[data-prepper-core-2.6.1.jar:?]
data-prepper | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
data-prepper | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
data-prepper | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
data-prepper | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
data-prepper | at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
data-prepper | 2024-02-27T01:10:03,231 [log_pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Document failed to write to OpenSearch with error code 0. Configure a DLQ to save failed documents. Error: Unable to load credentials from any of the providers in the chain AwsCredentialsProviderChain(credentialsProviders=[SystemPropertyCredentialsProvider(), EnvironmentVariableCredentialsProvider(), WebIdentityTokenCredentialsProvider(), ProfileCredentialsProvider(profileName=default, profileFile=ProfileFile(sections=[])), ContainerCredentialsProvider(), InstanceProfileCredentialsProvider()]) : [SystemPropertyCredentialsProvider(): Unable to load credentials from system settings. Access key must be s
@user-tc1st4hr6d 5 months ago
docker-compose.yaml:
  data-prepper:
    container_name: data-prepper
    image: opensearchproject/data-prepper:latest
    environment:
      - AWS_CONTAINER_CREDENTIALS_FULL_URI=localhost/get-credential
      - AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/get-credentials?a=1
    volumes:
      - ./log_pipeline.yaml:/usr/share/data-prepper/pipelines/log_pipeline.yaml
    ports:
      - 2021:2021
    networks:
      - opensearch-net
log_pipeline.yaml:
  log_pipeline:
    source:
      http:
        ssl: false
    processor:
      - grok:
          match:
            log: [ "%{COMMONAPACHELOG}" ]
    sink:
      - opensearch:
          hosts: [ "localhost:9200" ]
          insecure: true
          username: admin
          password: admin
          index: nginx_logs
          aws:
            serverless: true
            sts_role_arn: "arn:aws:iam::<AccountId>:role/PipelineRole"
            region: "us-east-1"
@user-tc1st4hr6d 5 months ago
Hi, can you help me? I run it on an Ubuntu system and then it reports an error.
@MiddlewareTechnologies 5 months ago
Hi @user-tc1st4hr6d, the playbooks and roles for this task are written for RPM-based distros. They won't work on an Ubuntu distro.
@prashanthsai6992 6 months ago
And what if we don't use Docker and do it directly?
@MiddlewareTechnologies 5 months ago
Hi @prashanthsai6992, ultimately these are images. You may want to try running the containerised services using Podman and see how that works.
@ratri_kahaniya 6 months ago
Thank you, that finally worked.
@user-tc1st4hr6d 6 months ago
Hello, I followed your tutorial step by step, but the final result doesn't have syslog. The result shows as follows:
Feb 07 15:34:54 u22-l2-dev collectd[61934]: plugin_load: plugin "irq" successfully loaded.
Feb 07 15:34:54 u22-l2-dev collectd[61934]: plugin_load: plugin "load" successfully loaded.
Feb 07 15:34:54 u22-l2-dev collectd[61934]: plugin_load: plugin "memory" successfully loaded.
Feb 07 15:34:54 u22-l2-dev collectd[61934]: plugin_load: plugin "processes" successfully loaded.
Feb 07 15:34:54 u22-l2-dev collectd[61934]: plugin_load: plugin "rrdtool" successfully loaded.
Feb 07 15:34:54 u22-l2-dev collectd[61934]: plugin_load: plugin "swap" successfully loaded.
Feb 07 15:34:54 u22-l2-dev collectd[61934]: plugin_load: plugin "users" successfully loaded.
Feb 07 15:34:54 u22-l2-dev collectd[61934]: Systemd detected, trying to signal readiness.
Feb 07 15:34:54 u22-l2-dev systemd[1]: Started Statistics collection and monitoring dae
@user-tc1st4hr6d 6 months ago
cat roles/linux_configure_collectd/templates/collectd.conf
LoadPlugin syslog
LoadPlugin cpu
LoadPlugin interface
LoadPlugin load
LoadPlugin memory
Include "/etc/collectd"
#Include "/etc/collectd.d"
@user-tc1st4hr6d 6 months ago
Where can I get the project scripts?
@MiddlewareTechnologies 6 months ago
Hi @user-tc1st4hr6d, it's just plain commands that were executed.
@user-tc1st4hr6d 6 months ago
Where can I get the project files?
@MiddlewareTechnologies 6 months ago
Hi @user-tc1st4hr6d, you can find the code at github.com/novicejava1/learnansible/tree/main/fluentbit
@user-tc1st4hr6d 5 months ago
Thanks @@MiddlewareTechnologies
@user-tc1st4hr6d 5 months ago
It seems that the code doesn't exist @@MiddlewareTechnologies
@user-tc1st4hr6d 6 months ago
Hi, I tried to run it on Ubuntu 22.04 and it reports the following messages:
Creating opensearch-node2 ...
Creating opensearch-node1 ... done
Creating opensearch-dashboards ...
Creating fluent-bit ... error
Creating data-prepper ...
Creating opensearch-node2 ... done
Creating opensearch-dashboards ... done
Creating data-prepper ... done
ERROR: for fluent-bit Cannot start service fluent-bit: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/home/dingyz/opensearch-project/fluent-bit.conf" to rootfs at "/fluent-bit/etc/fluent-bit.conf": mount /home/dingyz/opensearch-project/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
@user-tc1st4hr6d 6 months ago
I have solved this issue, but now it reports:
data-prepper | 2024-02-05T02:50:00,438 [log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the trust all strategy
data-prepper | 2024-02-05T02:50:00,440 [log-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Failed to initialize OpenSearch sink, retrying: Connection refused
data-prepper | 2024-02-05T02:50:01,441 [log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [log-pipeline] - sink is not ready for execution, retrying
@user-tc1st4hr6d 6 months ago
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.100.Final.jar:4.1.100.Final]
opensearch-node2 | at java.lang.Thread.run(Thread.java:833) [?:?]
opensearch-node2 | Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f5f6e6f6465733f66696c7465725f706174683d6e6f6465732e2a2e76657273696f6e253
@user-tc1st4hr6d 6 months ago
How do I configure the YAML to receive logs from Kafka and write them to OpenSearch?
@MiddlewareTechnologies 6 months ago
Hi @user-tc1st4hr6d, I will try to do a PoC on that. Thanks.
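In the meantime, here is a rough, untested sketch based on the Data Prepper kafka source plugin; the broker address, topic, group id, and index names are all assumptions to replace with your own:

```yaml
# Hypothetical Data Prepper pipeline: Kafka in, OpenSearch out
kafka_pipeline:
  source:
    kafka:
      bootstrap_servers: ["localhost:9092"]
      topics:
        - name: "app-logs"
          group_id: "data-prepper"
  sink:
    - opensearch:
        hosts: ["https://localhost:9200"]
        insecure: true
        username: admin
        password: admin
        index: kafka_logs
```

Check the plugin documentation for the exact option names in your Data Prepper version before relying on this.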
@user-tc1st4hr6d 4 months ago
@@MiddlewareTechnologies when will you finish it? Thanks. I'm trying to do it now but failed.
@homewolf 7 months ago
Hey there, thanks for sharing this. Just an FYI, it looks like the data for the `linux_ping` file is missing from your blog post and video.
@MiddlewareTechnologies 6 months ago
Thanks @homewolf, indeed I missed it. It's updated in my blog post now.
@kangnifredable 7 months ago
Hello, thank you for your video. I have Nginx and Tomcat installed on my VPS server, and my fetch gives me this issue: "The 'Access-Control-Allow-Origin' header contains multiple values '*, *, localhost, capacitor://localhost', but only one is allowed. Have the server send the header with a valid value, or, if an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled." How do I solve it, please?
@meddebimtiez5733 7 months ago
If I use a GitLab instance under domain/gitlab, how can I change it to an HTTPS config? I'm using a Kubernetes ingress to get to GitLab.
@MiddlewareTechnologies 7 months ago
Hi @meddebimtiez5733, assuming you are using a GitLab instance hosted on gitlab.com, you can have a look at name-based virtual hosting in Ingress to route your requests to external services. Here is the link: kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting. Hope this helps.
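As a reference, a minimal sketch of such an Ingress with TLS; the hostname, TLS secret, service name, and port are assumptions to adapt to your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitlab-ingress        # hypothetical name
spec:
  tls:
    - hosts:
        - gitlab.example.com  # assumed hostname
      secretName: gitlab-tls  # TLS cert/key stored as a Secret
  rules:
    - host: gitlab.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitlab  # assumed Service name
                port:
                  number: 80
```

The ingress controller terminates TLS using the referenced Secret and routes by the Host header, which is the name-based virtual hosting idea from the linked docs.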
@apnihorrorduniya 7 months ago
Great effort, sir.
@elyabin 7 months ago
Great video, thank you.
@cashedge1 7 months ago
There is no way I can thank you enough for this video. In all of YouTube, no one else explained the role and why Settings is not visible for my user. All points are explained superbly. Thank you for all your selfless efforts in putting up this video.
@PRAMODKUMAR-rz4wv 7 months ago
Great content. Hi, can we do the installation on a RHEL 8 VM? Can you please share the steps for the same?
@MiddlewareTechnologies 7 months ago
Hi @PRAMODKUMAR-rz4wv, APISIX is officially supported on CentOS 7/8, so ideally the same repos should work on RHEL 7/8, as CentOS is derived from RHEL. Here is the link: apisix.apache.org/docs/apisix/installation-guide/
@abessesmahi4888 7 months ago
Please increase the font size; it's too small. Thank you so much for your efforts.
@MiddlewareTechnologies 7 months ago
Sorry about that... I will make sure to do that when I record.
@aizhamalnazhimidinova1240 8 months ago
Hey, I have 28 vulnerabilities. How do I fix them? Is there automation?
@MiddlewareTechnologies 8 months ago
@aizhamalnazhimidinova1240 the OpenSCAP vulnerability report provides details of how they can be fixed, but you need to get them fixed yourself.
@user-zt2vd5jg8b 8 months ago
Hi, I am having an issue with the Data Prepper integration. Upon running the Docker Compose file the container starts, but when I check its log I have the following error:
2023-12-06T07:56:29,348 [log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.pipeline.Pipeline - Pipeline [log-pipeline] - sink is not ready for execution, retrying
2023-12-06T07:56:29,349 [log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Initializing OpenSearch sink
2023-12-06T07:56:29,349 [log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the username provided in the config.
2023-12-06T07:56:29,350 [log-pipeline-sink-worker-2-thread-1] INFO org.opensearch.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the trust all strategy
2023-12-06T07:56:29,360 [log-pipeline-sink-worker-2-thread-1] WARN org.opensearch.dataprepper.plugins.sink.opensearch.OpenSearchSink - Failed to initialize OpenSearch sink, retrying: opensearch.stack.com
@tamilselvan8343 8 months ago
Nice...
@andrevm9410 8 months ago
okay, okay, okay, okay?
@balajisurabhi2358 9 months ago
Good, it helped a lot! Thank you!
@GianluigiBiancucci 9 months ago
This way, cloning any repository results in an error saying that the "CA file" is missing, even though it works through the web interface like you showed.
@MiddlewareTechnologies 9 months ago
Hi @GianluigiBiancucci, it's because the self-signed certificates are not trusted. You can update the git config with the CA certificate file path and try to clone. Ref: git-scm.com/docs/git-config#Documentation/git-config.txt-httpsslCAPath
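For reference, the relevant git config section would look roughly like this; the certificate path below is an assumption, so point it at wherever you saved the GitLab CA certificate:

```ini
# ~/.gitconfig (sketch)
[http]
    sslCAInfo = /etc/ssl/certs/gitlab-ca.pem
```

git also supports `http.sslCAPath` (a directory of CA certificates) instead of `http.sslCAInfo` (a single bundle file), per the linked git-config documentation.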
@rubendariozarazuamartinez8660 9 months ago
Now this is also NOT working. I'm getting an error in the build process.
@ccl4872 9 months ago
Hey there, thank you for the video. I did everything just like in the video, but I keep getting an error. I don't know what the error is about; it just has a lot of "Traceback (most recent call)" and one of those errors says "configparser.MissingSectionHeaderError: File contains no section headers", referencing my buildozer.spec file. Any chance you could look at it and suggest some changes? Thank you very much.
@MiddlewareTechnologies 9 months ago
Hi, it may be because the buildozer.spec file parameter names have changed; I did this almost a year ago. If you can share it in a gist, I can have a look.
@ccl4872 9 months ago
@@MiddlewareTechnologies Thank you for the quick response. If I can do anything else, let me know. Thank you for your time.
@ccl4872 9 months ago
@@MiddlewareTechnologies YT keeps deleting my comments. I wanted to share my repository with you. I was saying that I can add you as a collaborator so you can take a look, or if there is any other way I can share it with you. Sorry for the trouble.
@user-yd9tt3ee8x 9 months ago
Very good video.
@MiddlewareTechnologies 9 months ago
Thanks for watching.
@sagarhm2237 10 months ago
How do I use a bearer token to access the cluster from a local machine?
@user-vh8kg6sr7e 10 months ago
Thanks a lot. Routing to localhost was not working for me as well; now I know the reason. Could you please elaborate a little on the reason, or any solution for that?
@MiddlewareTechnologies 10 months ago
Hi @user-vh8kg6sr7e, the backend Flask applications are launched separately on the host and accessed using the host FQDN. The APISIX service is running in a container, so if you route to localhost it will try to resolve the backend service within the container, and it will fail. You need to route to a backend service with an FQDN or IP that can be resolved from within the APISIX gateway. You can follow my blog with the step-by-step procedure outlined: middlewaretechnologies.in/2023/03/how-to-use-opensource-apache-apisix-as-an-api-gateway.html. The link is also available in the description.
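To make that concrete, a sketch of an APISIX route object whose upstream points at a resolvable host rather than localhost; the URI, hostname, and port below are illustrative assumptions:

```json
{
  "uri": "/flask/*",
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "myhost.example.com:5000": 1
    }
  }
}
```

The key point is that the upstream node address is resolved from inside the APISIX container, so it must be the host's FQDN or IP, not localhost.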
@user-du6hs8fe8x 10 months ago
It does not work for me. I copied and pasted everything. I have a Hyper-V VM using an internal switch with a working GitLab instance. Once I perform these steps, I can no longer access it with the domain specified in the config. You run GitLab on the same machine as the browser where you type the domain, right?
@CaHeoMapMap 10 months ago
Cool
@user-id1hf2do9y 10 months ago
Thank you, bro, you helped me a lot.
@kalashnikov203 10 months ago
Thank you