Android: add live camera effects to custom camera library CameraView
I'm creating an application that features live camera effects. I've used the custom camera library CameraView ( https://github.com/natario1/CameraView#capturing-video ) for capturing images and videos. Now the issue is that I want to apply live effects to the camera preview. I've searched a lot but couldn't find any solution. Kindly give your suggestions.
cameraView.addFrameProcessor(new FrameProcessor() {
    @Override
    @WorkerThread
    public void process(Frame frame) {
        byte[] data = frame.getData();
        int rotation = frame.getRotation();
        long time = frame.getTime();
        Size size = frame.getSize();
        int format = frame.getFormat();
        // Process...
    }
});
This is the method for frame processing.
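As a starting point, here is a minimal sketch of what could go inside process(): it converts the frame into a Bitmap so an ordinary Canvas/ColorMatrix effect can be applied to it. This assumes the frames arrive as NV21 data (android.graphics.YuvImage only accepts NV21/YUY2) and that Size exposes getWidth()/getHeight(); frameToBitmap is just an illustrative helper name. Note that a frame processor only hands you a copy of the data for analysis, so drawing a filtered preview in real time would still need an OpenGL/SurfaceTexture-based approach.

// Minimal sketch: convert an NV21 frame to a Bitmap inside process().
// Assumes frame.getFormat() == ImageFormat.NV21; adjust if your engine delivers another format.
private Bitmap frameToBitmap(Frame frame) {
    byte[] data = frame.getData();
    Size size = frame.getSize();
    if (data == null || size == null) return null;

    // Wrap the raw NV21 bytes and compress them to an in-memory JPEG.
    YuvImage yuv = new YuvImage(data, ImageFormat.NV21,
            size.getWidth(), size.getHeight(), null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, size.getWidth(), size.getHeight()), 90, out);
    byte[] jpeg = out.toByteArray();

    // Decode to a Bitmap; any effect (e.g. a ColorMatrix drawn through a Canvas) can be applied now.
    return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
}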
See also questions close to this topic
- Send message to specific contact on Viber using Intent on Android App
Is it possible to send a message to a specific contact without having to choose it after the intent takes you to the Viber contact list? I want to send it directly.
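For what it's worth, a common approach is a plain share intent targeted at the Viber app, which opens Viber directly but still lands on its contact picker; as far as I know there is no public intent extra for preselecting a contact. A minimal Java sketch, assuming Viber's package name is com.viber.voip:

// Sketch: share text directly to Viber (package name assumed to be "com.viber.voip").
// The user still has to pick the contact inside Viber. Call from an Activity.
Intent intent = new Intent(Intent.ACTION_SEND);
intent.setType("text/plain");
intent.putExtra(Intent.EXTRA_TEXT, "Hello from my app");
intent.setPackage("com.viber.voip");
if (intent.resolveActivity(getPackageManager()) != null) {
    startActivity(intent);
}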
- Reduce memory and high CPU usage in Android
- Which of these methods is an efficient way to populate a RecyclerView? (WeakReference vs strong reference)
I am populating my RecyclerView from SQLite, and the population is done on a new thread using a Runnable. Here is the code:
private class BackgroundRunnable internal constructor(context: NotesFrag) : Runnable {
    private val weakReference = WeakReference(context)

    override fun run() {
        val parentClass = weakReference.get()
        parentClass!!.list!!.clear()
        parentClass.listItems!!.clear()
        parentClass.list = parentClass.dbHandler!!.readAllNotes()
        if (parentClass.list!!.size == 0) return

        for (noteReader in parentClass.list!!.iterator()) {
            // val note = UserNotes() // USING STRONG REFERENCE, OR
            val weakReferrerNote: WeakReference<UserNotes> = WeakReference(UserNotes())
            val note = weakReferrerNote.get() // USING WEAK REFERENCE?
            note!!.noteTitle = noteReader.noteTitle
            note.noteText = noteReader.noteText
            note.noteID = noteReader.noteID
            note.noteColor = noteReader.noteColor
            note.noteEncrypted = noteReader.noteEncrypted
            note.noteCheckList = noteReader.noteCheckList
            note.noteDate = noteReader.noteDate
            note.noteTempDel = noteReader.noteTempDel
            note.noteArchived = noteReader.noteArchived
            note.noteReminderID = noteReader.noteReminderID
            note.noteReminderText = noteReader.noteReminderText
            parentClass.listItems!!.add(note)
        }

        parentClass.activity!!.runOnUiThread(object : Runnable {
            override fun run() {
                parentClass.adapter!!.notifyDataSetChanged()
                parentClass.list!!.clear()
                parentClass.list = null
                return
            }
        })
    }
}
As you can see, there are two ways to create the objects in the for loop (see the comments). I know the difference between a strong reference and a weak reference, but I am a little confused: if I use a weak reference to create an object, what happens if it gets garbage collected during the population? Would that affect the way the RecyclerView gets populated? Thinking of the worst-case scenario (imagine a slow user device with low memory), is this the correct way to do it?
The other method is using strong references, like val note = UserNotes(). This means the object will not be easily garbage collected and memory consumption would increase. Thinking about the worst case here as well, imagine a user with 100+ notes; that means 100+ strong objects created, hence wasted memory? I am trying to make my app as efficient as possible by thinking of worst cases and implementing an algorithm that handles as many of them as possible efficiently.
- How to set camera adjustments (focus, aperture, white balance)
I am a beginner. I am working with pinhole cameras and have some basic questions:
1) Focus: I know that I can set the focus manually. Can I also do that from the software/programming side? How can I check that the focus is okay? Only by visual inspection? Could you advise an algorithm? (A sketch follows after this list.)
2) If the camera is always in the same position, is it worth performing autofocus?
3) White balance: How do I calculate optimal RGB values? How can I check the white balance in an image?
5) About the camera aperture.
I would appreciate any help, please.
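For the focus question, one commonly used software check is the variance of the Laplacian: a well-focused image contains more high-frequency detail, so the variance is higher. A rough Java sketch using the OpenCV bindings; the threshold value is only an assumption you would tune for your camera and scene:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.imgproc.Imgproc;

public class FocusCheck {
    /** Returns the variance of the Laplacian; higher values mean a sharper image. */
    public static double sharpness(Mat bgrImage) {
        Mat gray = new Mat();
        Imgproc.cvtColor(bgrImage, gray, Imgproc.COLOR_BGR2GRAY);

        Mat laplacian = new Mat();
        Imgproc.Laplacian(gray, laplacian, CvType.CV_64F);

        MatOfDouble mean = new MatOfDouble();
        MatOfDouble stdDev = new MatOfDouble();
        Core.meanStdDev(laplacian, mean, stdDev);

        double sigma = stdDev.get(0, 0)[0];
        return sigma * sigma; // variance of the Laplacian
    }

    public static boolean isInFocus(Mat bgrImage) {
        return sharpness(bgrImage) > 100.0; // threshold is an assumption; tune it empirically
    }
}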
- Ubuntu 18.04 doesn't recognize my IDS USB camera
I'm working on a computer vision project and I'd like to use an IDS USB camera with Ubuntu 18.04. The problem is that Ubuntu doesn't detect it, even though I followed all of uEye's instructions.
The output of the command lsusb is:

Bus 001 Device 006: ID 0bda:0129 Realtek Semiconductor Corp. RTS5129 Card Reader Controller
Bus 001 Device 005: ID 0a5c:21d7 Broadcom Corp. BCM43142 Bluetooth 4.0
Bus 001 Device 004: ID 0c45:670b Microdia
Bus 001 Device 008: ID 1ea7:0066
Bus 001 Device 002: ID 8087:8001 Intel Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

and the output of the command gvfs-mount -l is:

This tool has been deprecated, use 'gio mount' instead. See 'gio help mount' for more info.
Drive(0): WDC WD7500BPVT-80HXZT1
  Type: GProxyDrive (GProxyVolumeMonitorUDisks2)
Drive(1): HL-DT-ST DVD+/-RW GU90N
  Type: GProxyDrive (GProxyVolumeMonitorUDisks2)
Can you help me, please? I saw in some posts that rebooting the system can resolve the issue, but I don't have the time to reboot and reinstall everything, and I don't want to lose files... Thank you for the -1, but I need help, not judgement :)
- How to perform lens distortion correction on these images?
- How to add a special effect to an Android GUI
I am a beginner Android Studio user. The special effect I have in mind is this: imagine you are putting on night-vision glasses and the glasses are initializing to the environment; during that period, a green film sort of "refreshes" the scene with an aesthetic effect. I know this sounds kind of vague, but I could not think of a better way to explain it or where to start.
I want to build this effect in Android Studio. Any suggestions on where to start or which libraries I can use?
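As one possible starting point, you can fake the "initializing" green film with a plain overlay View on top of your layout and animate it with ObjectAnimator, with no extra library. A minimal sketch, where green_overlay is a hypothetical full-screen View with a translucent green background defined in the layout:

// Sketch: sweep a translucent green overlay down the screen a few times, then fade it out.
// "green_overlay" is a hypothetical View with android:background="#5500FF00".
View greenOverlay = findViewById(R.id.green_overlay);
greenOverlay.post(() -> {
    // Run after layout so getHeight() returns the real size.
    ObjectAnimator sweep = ObjectAnimator.ofFloat(
            greenOverlay, "translationY", -greenOverlay.getHeight(), greenOverlay.getHeight());
    sweep.setDuration(800);
    sweep.setRepeatCount(3);

    ObjectAnimator fadeOut = ObjectAnimator.ofFloat(greenOverlay, "alpha", 1f, 0f);
    fadeOut.setDuration(500);

    AnimatorSet set = new AnimatorSet();
    set.playSequentially(sweep, fadeOut);
    set.start();
});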
- How to generate main effects for glmmTMB generalised linear mixed models
I have run a series of generalised linear mixed models using glmmTMB in order to assess differences in behaviour of animals at different facilities. The saturated model involves a full-cross of main effects, one random effect (random intercept), one nested effect and two continuous predictors (see code below). I want to assess the significance of the main effects in the model (three main effects representing two primary effects and their interaction effect: behaviour, facility, behaviour:facility) and am using anova() in the base package to compare models against one another but keep getting Chi-squared values and degrees of freedom that seem not to make sense. None of the models give errors when run on their own but the comparisons using anova() are not making sense. I don't understand why or what I should be doing to get around this.
Given that the models were run using the glmmTMB package in R, I cannot use the Anova() function from the car package, since glmmTMB model objects are not supported and it generates error messages when I try. I have also tried the glmmADMB package, but found it far slower (some models literally take hours to run) and the model comparisons were just as nonsensical.
My code:
rm()
detach()
library(glmmTMB)
dataset <- read.csv("Dur18Bletters.csv", header = T)
attach(dataset)
names(dataset)
mod1 <- glmmTMB(Count ~ Facility + Beh + Facility:Beh + (1|Sex) + (1|ID/Day) + Age + Attendance, family = nbinom2)
mod2 <- glmmTMB(Count ~ Facility + Beh + (1|Sex) + (1|ID/Day) + Age + Attendance, family = nbinom2)
mod3 <- glmmTMB(Count ~ Facility + Facility:Beh + (1|Sex) + (1|ID/Day) + Age + Attendance, family = nbinom2)
mod4 <- glmmTMB(Count ~ Beh + Facility:Beh + (1|Sex) + (1|ID/Day) + Age + Attendance, family = nbinom2)
anova(mod1, mod2, test = "LRT")
anova(mod1, mod3, test = "LRT")
anova(mod1, mod4, test = "LRT")
The output of the three deviance tests:
Data: NULL
Models:
mod2: Count ~ Facility + Beh + (1 | Sex) + (1 | ID/Day) + Age + People, zi=~0, disp=~1
mod1: Count ~ Facility + Beh + Facility:Beh + (1 | Sex) + (1 | ID/Day) + Age + People, zi=~0, disp=~1
     Df   AIC   BIC  logLik deviance  Chisq Chi Df Pr(>Chisq)
mod2 26 19672 19823 -9810.0   19620
mod1 60 19447 19795 -9663.6   19327 292.69     34  < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

> anova(mod1,mod3,test="LRT")
Data: NULL
Models:
mod1: Count ~ Facility + Beh + Facility:Beh + (1 | Sex) + (1 | ID/Day) + Age + People, zi=~0, disp=~1
mod3: Count ~ Facility + Facility:Beh + (1 | Sex) + (1 | ID/Day) + Age + People, zi=~0, disp=~1
     Df   AIC   BIC  logLik deviance Chisq Chi Df Pr(>Chisq)
mod1 60 19447 19795 -9663.6   19327
mod3 60 19447 19795 -9663.6   19327     0      0          1

> anova(mod1,mod4,test="LRT")
Data: NULL
Models:
mod1: Count ~ Facility + Beh + Facility:Beh + (1 | Sex) + (1 | ID/Day) + Age + People, zi=~0, disp=~1
mod4: Count ~ Beh + Facility:Beh + (1 | Sex) + (1 | ID/Day) + Age + People, zi=~0, disp=~1
     Df   AIC   BIC  logLik deviance Chisq Chi Df Pr(>Chisq)
mod1 60 19447 19795 -9663.6   19327
mod4 60 19447 19795 -9663.6   19327     0      0  < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
When I examine the first model comparison (i.e. anova(mod1, mod2)), the output seems reasonable and credible. I would thus conclude that the main effect removed in mod2 made a significant contribution to the model, as evidenced by a significant difference between the two models. The other two comparisons, however, make no sense: it is not possible that a chi-squared value of 0 on 0 degrees of freedom generates a p-value of < 0.0001.
Any assistance that you can provide would be greatly appreciated! I'm not particularly committed to this means of estimating main effects (it's the only method I could find while browsing the forums) so if anyone has an alternative package/method that I could try, that would be great! I am at a complete loss.
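As a side note on why those zero rows cannot be meaningful: anova() is doing a likelihood-ratio test here, which (under the usual assumptions) compares two nested fits via

\chi^2 = 2\,(\ell_{\mathrm{full}} - \ell_{\mathrm{reduced}}), \qquad \mathrm{df} = k_{\mathrm{full}} - k_{\mathrm{reduced}}

Plugging in the reported values for mod1 vs mod2 gives 2(-9663.6 - (-9810.0)) ≈ 292.7 on 60 - 26 = 34 df, which matches the first table. For mod1 vs mod3 and mod1 vs mod4 the log-likelihoods and parameter counts are identical, so both the statistic and the df are 0 and no valid p-value (certainly not < 2.2e-16) can come out of that comparison.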
- Custom effect for entry: clear-text button
I am creating a custom entry and have everything almost working correctly. On most of my entries, the 'x' button shows and hides depending on whether there is text in the entry. However, for some reason, when I click on one of my entries, it clears whatever is in it.
Here is where I check whether my 'x' is tapped:
public class OnDrawableTouchListener : Java.Lang.Object, Android.Views.View.IOnTouchListener
{
    public bool OnTouch(Android.Views.View v, MotionEvent e)
    {
        if (v is EditText && e.Action == MotionEventActions.Up)
        {
            EditText editText = (EditText)v;
            if (!editText.Text.Equals(""))
                editText.SetCompoundDrawablesRelativeWithIntrinsicBounds(0, 0, Resource.Drawable.Subtraction20, 0);

            if (editText.GetCompoundDrawables()[2] != null)
            {
                // get actual position of tap
                if (e.RawX >= (editText.Right - editText.GetCompoundDrawables()[2].Bounds.Width()))
                {
                    // clear entry
                    editText.Text = string.Empty;
                    return true;
                }
            }
        }
        return false;
    }
}